Performance monitoring: JMX monitoring of Java applications in Docker containers

I. Introduction

While configuring JMX monitoring for an application running in Docker today, I ran into a detail that differs from the JMX configuration in a non-container environment, so I am writing it down here for others to refer to.

II. The problems encountered

1. Problem phenomenon

Under normal circumstances, configuring JMX only requires writing the following parameters.

These are the JMX parameters for monitoring without a password (the password-protected configuration is otherwise no different from regular monitoring):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9998
-Djava.rmi.server.hostname=<serverip>
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
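
On a plain host (no container involved), these flags are simply appended to the java command line. A minimal sketch, assuming a hypothetical app.jar and the machine's real IP in place of <serverip>:

java -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9998 \
  -Djava.rmi.server.hostname=<serverip> \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -jar app.jar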

But when the same configuration is used inside a Docker container, the following error occurs.

(screenshot: the error reported when the JMX client tries to connect)

2. Problem analysis

Let me explain the logic behind this. Why does it happen?

First, look at the network structure of the Docker environment.

The container uses the default network model, bridge mode. In this mode, DNAT rules created at docker run time are what forward traffic to the container. So the network information looks like this:

Network card information in docker:

[root@<container> /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.3  netmask 255.255.0.0  broadcast 0.0.0.0
        inet6 fe80::42:acff:fe12:3  prefixlen 64  scopeid 0x20<link>
        ether 02:42:ac:12:00:03  txqueuelen 0  (Ethernet)
        RX packets 366  bytes 350743 (342.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 358  bytes 32370 (31.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Routing information in docker:

[root@<container> /]# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         gateway         0.0.0.0         UG        0 0          0 eth0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 eth0
[root@<container> /]#

Corresponding network card information on the host:

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
		inet 172.18.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
		ether 02:42:44:5a:12:8f  txqueuelen 0  (Ethernet)
		RX packets 6691477  bytes 498130
		RX errors 0  dropped 0  overruns 0  frame 0
		TX packets 6751310  bytes 3508684363 (3.2 GiB)
		TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Routing information on the host machine:

[root@<host> ~]# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         gateway         0.0.0.0         UG        0 0          0 eth0
link-local      0.0.0.0         255.255.0.0     U         0 0          0 eth0
172.17.208.0    0.0.0.0         255.255.240.0   U         0 0          0 eth0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.16.0    0.0.0.0         255.255.240.0   U         0 0          0 br-676bae33ff92

Therefore, the host and the container can communicate directly, even when the port is not mapped:

[root@<host> ~]# telnet 172.18.0.3 8080
Trying 172.18.0.3...
Connected to 172.18.0.3.
Escape character is '^]'.

In addition, because the network is bridged, the host also has virtual network interfaces with names like veth0b5a080, for example:

veth0b5a080: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 42:c3:45:be:88:1a  txqueuelen 0  (Ethernet)
        RX packets 2715512  bytes 2462280742 (2.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2380143  bytes 2437360499 (2.2 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

This is one end of a veth pair: one end sits inside the Docker container and the other on the host. In this mode, however many containers are running, that many veth* devices will exist on the host. A quick way to confirm this is shown below.
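
To check this on your own machine, you can ask Docker which network model a container uses and list the host-side veth devices. A rough sketch; the container name is a placeholder:

docker inspect -f '{{.HostConfig.NetworkMode}}' <container-name>   # "default" or "bridge" means the bridge model described above
ip link show type veth                                             # one host-side veth interface per bridged container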

But if the access does not come from the host itself, it will not work, as shown below:

(figure: remote access fails when the port is not mapped)


When we access it from the monitoring machine, this is what we get:

Zees-Air-2:~ Zee$ telnet <serverip> 8080
Trying <serverip>...
telnet: connect to address <serverip>: Connection refused
telnet: Unable to connect to remote host
Zees-Air-2:~ Zee$

Because 8080 is a port opened by the container, not by the host, other machines cannot reach it. That is why the port has to be mapped out for remote access: once it is mapped, a NAT rule is created so the packets can reach the container, as the sketch below illustrates.
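
As a concrete sketch (the nginx image and the container name are only for illustration), publishing a port at docker run time is exactly what creates the corresponding DNAT rule:

docker run -d --name web -p 91:80 nginx       # publish container port 80 as host port 91
iptables -t nat -nL DOCKER | grep 'dpt:91'    # the matching DNAT rule should now be listed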

Checking the NAT rules makes this clear, as follows:

[root@<host> ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 171 packets, 9832 bytes)
	pkts bytes target     prot opt in     out     source               destination
	553K   33M DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 171 packets, 9832 bytes)
	pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 2586 packets, 156K bytes)
	pkts bytes target     prot opt in     out     source               destination
	205K   12M DOCKER     all  --  *      *       0.0.0.0/0           !60.205.104.0/22      ADDRTYPE match dst-type LOCAL
	0     	0 DOCKER      all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 2602 packets, 157K bytes)
 pkts bytes target     prot opt in     out     source               destination
 265K   16M MASQUERADE  all  --  *      !docker0  172.18.0.0/16        0.0.0.0/0
	0     0 MASQUERADE  all  --  *      !br-676bae33ff92  192.168.16.0/20      0.0.0.0/0
	0     0 MASQUERADE  tcp  --  *      *       192.168.0.4          192.168.0.4          tcp dpt:7001
	0     0 MASQUERADE  tcp  --  *      *       192.168.0.4          192.168.0.4          tcp dpt:4001
	0     0 MASQUERADE  tcp  --  *      *       192.168.0.5          192.168.0.5          tcp dpt:2375
	0     0 MASQUERADE  tcp  --  *      *       192.168.0.8          192.168.0.8          tcp dpt:8080
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.4           172.18.0.4           tcp dpt:3306
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.5           172.18.0.5           tcp dpt:6379
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.2           172.18.0.2           tcp dpt:80
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.6           172.18.0.6           tcp dpt:9997
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.6           172.18.0.6           tcp dpt:9996
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.6           172.18.0.6           tcp dpt:8080
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.3           172.18.0.3           tcp dpt:9995
	0     0 MASQUERADE  tcp  --  *      *       172.18.0.3           172.18.0.3           tcp dpt:8080

Chain DOCKER (3 references)
	pkts bytes target  prot opt   in     out     source               destination
	159K 9544K RETURN  all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
	0    0 RETURN      all  --  br-676bae33ff92 *  0.0.0.0/0            0.0.0.0/0
	1    40 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3307 to:172.18.0.4:3306
	28  1486 DNAT      tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:6379 to:172.18.0.5:6379
    228 137K  DNAT     tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:91 to:172.18.0.2:80
	3   192 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9997 to:172.18.0.6:9997
	0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9996 to:172.18.0.6:9996
	0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9002 to:172.18.0.6:8080
	12   768 DNAT      tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9995 to:172.18.0.3:9995
	4   256 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9004 to:172.18.0.3:8080

[root@<host> ~]#

We can see that traffic to host port 91 is forwarded to 172.18.0.2:80, and traffic to host port 3307 is forwarded to 172.18.0.4:3306. The docker port command shows the same mappings more directly, as in the sketch below.
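
For a quicker view of the same information, docker port lists a container's published mappings without reading the whole iptables output (the container name is a placeholder):

docker port <container-name>        # prints mappings such as "80/tcp -> 0.0.0.0:91"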

So what does any of this have to do with JMX? The painful part is that JMX works like this:

(figure: the JMX registration and invocation flow)


The JMX agent registers itself on the port set by jmxremote.port; when a client connects there, it is handed back a new port, jmxremote.rmi.port.

The actual service calls then go over jmxremote.rmi.port. As mentioned earlier, in bridge mode a Docker port must be explicitly published with -p; otherwise there is no NAT rule and the packets cannot reach it. So jmxremote.rmi.port must be exposed as well, which means it has to be specified explicitly. If it is not specified, the JVM opens a random port for it, and a randomly opened port has no NAT rule, so the connection fails. You can observe this random port as shown below.
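
One way to observe the random port, assuming net-tools is installed in the image, is to list the listening ports of the Java process inside the container; with jmxremote.rmi.port unset you should see the configured registry port plus one extra, randomly chosen port that changes on every restart:

netstat -lntp | grep java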

III. The solution

Therefore, in this situation jmxremote.rmi.port must be given a fixed value and exposed. The configuration is as follows:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9995
-Djava.rmi.server.hostname=<serverip>
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.rmi.port=9995

With the setting above, both ports are 9995, which is allowed; the registration port and the invocation port are simply merged into one.

Then, when starting the Docker container, the port also has to be published, like this:

docker run -d -p 9003:8080 -p 9995:9995 --name 7dgroup-tomcat5 \
-e CATALINA_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=9995 \
-Djava.rmi.server.hostname=<serverip> \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.rmi.port=9995" c375edce8dfd

After that, the JMX tool can connect.
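
For example, with jconsole from the JDK (any RMI-capable JMX client connects the same way), where <serverip> is the host's address:

jconsole <serverip>:9995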

(screenshots: the JMX tool connected to the application in the container)


The same problem can also appear in network environments with firewalls or similar devices in between. Once you understand JMX's registration and invocation logic, all of these variations can be worked out.

The network path is something anyone doing performance analysis needs to understand, which is why so much space was spent on it above.

IV. Summary

A few words on choosing a JMX tool. Some people like fancy GUIs, some like simple ones, and some like a plain black terminal window. In my view the choice depends on the situation: when analyzing performance, pick the tool that fits the job, not the one that shows off the most technical skill.

Finally, an assignment:

If docker run uses -p 19995:9995, exposing the RMI port under a different host port number while everything else stays the same, can the JMX tool still connect?

If jmxremote.rmi.port and jmxremote.port are not merged, but both ports are exposed and everything else stays the same, can the JMX tool still connect?

If you are interested, you can try it yourself.