
Tomcat 7 remote deployment

I decided to set up automatic deployment of a war-packaged application using Jenkins and the Deployment plugin. The target platform is Amazon with Tomcat 7; there is a nice set of articles describing how to set up such an environment for free.

Well, there are a couple of tutorials out there, but they miss some points, which cost me an hour of work.

What I got

  • A fresh installation of Tomcat 7 on a remote machine, with port 8080 opened on the firewall
  • My own war file that is supposed to be deployed

How to push it to Tomcat?


1. First of all, there is a simple configuration of Tomcat users in the file conf/tomcat-users.xml - it was my pain in the ass :-) As the original, comprehensive documentation says, it's necessary to define a user, but which one(s)?

Here is a working example of tomcat-users.xml:

<tomcat-users>
  <!-- roles referenced by the users below; the Tomcat documentation declares them explicitly -->
  <role rolename="manager-gui"/>
  <role rolename="manager-script"/>
  <user username="manager-gui" password="changeit" roles="manager-gui"/>
  <user username="manager-script" password="changeit" roles="manager-script"/>
</tomcat-users>

The important part is manager-script, a role that did not exist yet in Tomcat 6. This user gets access to the /text sub-namespace of the manager URI. The first user, manager-gui, is the one you use in the GUI console, e.g. http://localhost:8085/manager/html

Once you start Tomcat using the startup script in bin (startup.bat on Windows, startup.sh elsewhere), you can move on to the second step.
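Before deploying, you can check that the manager-script user really has access to the text API by calling its list command - a quick sanity check, using the credentials and port from the example above:

curl "http://manager-script:changeit@localhost:8080/manager/text/list"

A healthy setup answers with "OK - Listed applications for virtual host localhost" followed by the contexts that are currently deployed.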

2. Now it's possible to deploy remotely using a curl command, e.g. in my use case:

curl --upload-file my.war "http://manager-script:changeit@localhost:8080/manager/text/deploy?path=/myPath&update=true"
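If everything is set up correctly, the manager answers with something like this (the exact wording may differ between Tomcat versions):

OK - Deployed application at context path /myPath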

The command works with the manager-script user, contrary to my original attempt with manager-gui. Another interesting part is path=/myPath. This parameter says which URL sub-namespace is to be used.

Even though you deploy my.war, and Tomcat's usual approach would be to expose the application under /my, the application will be exposed on /myPath.
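The same text API can also clean up after you. If you need to remove the application again, for example before a manual redeploy, there is an undeploy command - a sketch using the same credentials and context path as above:

curl "http://manager-script:changeit@localhost:8080/manager/text/undeploy?path=/myPath"

The Jenkins Deployment plugin talks to the same manager interface, so the manager-script user configured above is the one to use in the job configuration as well.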