Set up a git repository and access it with https

In this blog post I show you how to set up a git repository on a CentOS 7 system and access it over https. Most tutorials I could find used the ssh protocol to access the repository. That was not what I wanted: I don’t want to create new Linux system users for all the colleagues who only need access to the repository, and having a single git user shared by everybody does not seem like a professional solution.
Fortunately, there is a simple CGI program that serves the contents of a Git repository to Git clients accessing it over the http:// and https:// protocols. The module is called git-http-backend. In this tutorial I’ll show you how you can set up your own git repository server and access it from your Eclipse installation over https with a self-signed certificate. I’m sure you’ll save a lot of time by reading this blog post, because there are a lot of stumbling blocks, and SELinux needs to be convinced that git-http-backend is allowed to work. As web server we use Apache2 (httpd), because git-http-backend is a CGI script. Theoretically Nginx can also handle CGI scripts, but you would have to install an additional application server such as uWSGI. You can find more information about this topic here. Using Nginx instead of Apache2 makes it a lot more complicated. Believe me, I tried it myself.

Install Git

# yum install git

After the installation you should find git-http-backend on your system, verify with:

# ls -al /usr/libexec/git-core/git-http-backend

Install Apache2

# yum install httpd

And make sure httpd starts on boot time:

# systemctl enable httpd

After the installation we need an additional apache module called mod_ssl. mod_ssl is needed for https access. Just install the module with:

# yum install mod_ssl

Next, verify the module installation. On Red Hat systems the main Apache configuration is held in one file, /etc/httpd/conf/httpd.conf, while the LoadModule statements live in the files under /etc/httpd/conf.modules.d. By default, all modules are loaded once they have been installed. If you don’t want to load a module, edit the appropriate file in /etc/httpd/conf.modules.d and comment out the LoadModule line with #.
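
For example, you can quickly check that mod_ssl will be loaded (the exact file name in conf.modules.d may differ on your system):

# grep -r ssl_module /etc/httpd/conf.modules.d/
/etc/httpd/conf.modules.d/00-ssl.conf:LoadModule ssl_module modules/mod_ssl.so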

Create virtual host

In the next step we create the virtual host my.domain.ch. Replace my.domain.ch with your own host name, for example git.test.ch, and make sure it points to the server on which you installed the Apache daemon. Create the file /etc/httpd/conf.d/secure.my.domain.ch.conf with the following content:

<IfModule mod_ssl.c>
<VirtualHost *:443>
    SSLEngine on

    SSLCertificateFile /data/git/ssl/git.cert.crt
    SSLCertificateKeyFile /data/git/ssl/git.cert.key
    SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

    ServerName my.domain.ch
    ErrorLog /var/log/httpd/git-error.log
    CustomLog /var/log/httpd/git-access.log combined

    # GIT Config
    SetEnv GIT_PROJECT_ROOT /data/git/repositories
    SetEnv GIT_HTTP_EXPORT_ALL

    # Route Git-Http-Backend
    ScriptAlias / /usr/libexec/git-core/git-http-backend/

    # Require access for all resources
    <Location />
        AuthType Basic
        AuthName "Private"
        Require valid-user
        AuthUserFile /data/my.htpasswd
    </Location>
</VirtualHost>
</IfModule>

Create a git repository

Note the repository location within the configuration file above:

SetEnv GIT_PROJECT_ROOT /data/git/repositories

Now we create the necessary directory and initialise a first git repository:

# mkdir -p /data/git/repositories
# cd /data/git/repositories
# git --bare init my-project.git
# chown -R apache:apache /data/git/repositories

Create Certificate

Note the ssl certificate and ssl certificate key file location within the configuration file above:

SSLCertificateFile /data/git/ssl/git.cert.crt
SSLCertificateKeyFile /data/git/ssl/git.cert.key

Next we create the needed files. Answer the question for “Common Name” with the real host name you want to use for accessing the repository, e.g. my.domain.ch:

# mkdir /data/git/ssl
# cd /data/git/ssl
# openssl req -new > git.cert.csr
# openssl rsa -in privkey.pem -out git.cert.key
# openssl x509 -in git.cert.csr -out git.cert.crt  -req -signkey git.cert.key -days 3650
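
You can inspect the resulting certificate to confirm the Common Name and the validity period:

# openssl x509 -in git.cert.crt -noout -subject -dates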

Create the password file

Notice the AuthUserFile within the configuration file above; this is the file in which Apache looks up users and passwords for basic authentication. Create the file my.htpasswd as follows and provide a password when asked:

# yum provides \*bin/htpasswd
# yum install httpd-tools-2.4.6-18.el7.centos.x86_64
# htpasswd -c /data/my.htpasswd myuser
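
If you want to add more users later, omit the -c flag, otherwise the existing file would be overwritten:

# htpasswd /data/my.htpasswd anotheruser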

SELinux configurations

Try to restart apache now with:

# systemctl restart httpd.service

As you can see, Apache isn’t starting anymore, because SELinux is blocking the httpd daemon. A message like this occurs:

Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.

Type the following for more information:

# systemctl status httpd.service
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
   Active: failed (Result: exit-code) since Di 2014-12-30 16:27:00 CET; 9s ago
  Process: 12132 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
  Process: 12130 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
 Main PID: 12130 (code=exited, status=1/FAILURE)

Dez 30 16:26:59 ??? httpd[12130]: AH00526: Syntax error on line 5 of /etc/httpd/conf.d/secure.my.domain.ch.conf:
Dez 30 16:26:59 ??? httpd[12130]: SSLCertificateFile: file '/data/git/ssl/git.cert.crt' does not exist or is empty
Dez 30 16:26:59 ??? systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILURE
Dez 30 16:27:00 ??? systemd[1]: Failed to start The Apache HTTP Server.
Dez 30 16:27:00 ??? systemd[1]: Unit httpd.service entered failed state.

Also check the files /var/log/messages, /var/log/httpd/error_log and /var/log/audit/audit.log for additional information. Use the following commands to generate human-readable reports from audit.log:

# yum whatprovides sealert
# yum install setroubleshoot-server-3.2.17-2.el7.x86_64
# sealert -a /var/log/audit/audit.log > /path/to/mylogfile.txt

The error message above tells us that /data/git/ssl/git.cert.crt does not exist or is empty. But neither is true, and the file even has read permissions. So why does Apache throw this message? The answer: the SELinux security file context is wrong. If SELinux blocks an action, this is reported to the underlying application as a normal “access denied” type error. Many applications, however, do not test all return codes on system calls and may return no message explaining the issue, or may report it in a misleading fashion. In fact the httpd daemon is not allowed to read the file /data/git/ssl/git.cert.crt. To solve this problem we can take over the security context from another file which already has the correct context:

chcon --reference=/etc/pki/tls/certs/ca-bundle.crt /data/git/ssl/git.cert.crt
chcon --reference=/etc/pki/tls/certs/ca-bundle.crt /data/git/ssl/git.cert.key
chcon --reference=/etc/pki/tls/certs/ca-bundle.crt /data/git/ssl/git.cert.csr
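
Keep in mind that chcon changes can be lost when the filesystem is relabeled. To record the context permanently, you can additionally add a rule with semanage (from the package policycoreutils-python) and apply it with restorecon:

# semanage fcontext -a -t cert_t "/data/git/ssl(/.*)?"
# restorecon -Rv /data/git/ssl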

Or copy the files to the common locations and adapt your /etc/httpd/conf.d/secure.my.domain.ch.conf accordingly.

# cp /data/git/ssl/git.cert.crt /etc/pki/tls/certs/
# cp /data/git/ssl/git.cert.key /etc/pki/tls/private/
# cp /data/git/ssl/git.cert.csr /etc/pki/tls/private/

If you copy the files to the common locations, then the security context will be set automatically.
With the following command you can see the security context of a file:

# ls -alZ /data/git/ssl/git.cert.crt
-rw-r--r--. root root unconfined_u:object_r:cert_t:s0  /data/git/ssl/git.cert.crt

Make sure only root can read the certificate files:

# chmod 600 /data/git/ssl/git.*

After you have set the security context to cert_t, the httpd daemon should be able to read the files. Try to restart the daemon now.

# systemctl restart httpd.service

Now you should see the same problem with the AuthUserFile /data/my.htpasswd, which is used for basic authentication. Use the following command to set the correct security context:

# chcon -t httpd_sys_content_t /data/my.htpasswd

Restart httpd again; it should now start without any problems.

Install a certificate

For a first test we use curl on the system where the git repository is installed to verify access to the repository; simply use:

curl -u myuser:'mypassword' https://my.domain.ch/my-project.git

Because we use a self-signed certificate, you should face the following error message:

curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

As the message tells you, you could turn off certificate verification, and you could probably do the same for your git client afterwards. Nevertheless this is not secure, because the client would accept any certificate without verification; your client software can’t be sure it is talking to the intended endpoint server. The secure way is to install the certificate on the client, which you can do with:

$ update-ca-trust enable
$ cp /data/git/ssl/git.cert.crt /etc/pki/ca-trust/source/anchors/
$ update-ca-trust extract
$ systemctl reload httpd.service

Now try again:

# curl -u myuser:'mypassword' https://my.domain.ch/my-project.git

If everything works as expected, curl should return without an error message.

Access with a git client

Now we try to access our repository with a git client; provide the password when asked:

# git clone https://myuser@my.domain.ch/my-project.git

One more time you will see an error message saying something like:

fatal: repository 'https://myuser@my.domain.ch/my-project.git/' not found

If you check the error log /var/log/httpd/git-error.log you can see:

[cgi:error] [pid 12890] [client ???.???.???.???:?????] AH01215: Not a git repository: '/data/git/repositories/my-project.git'

Here again SELinux blocks: the process /usr/libexec/git-core/git is not allowed to access /data/git/repositories. You can find more information in the SELinux log file:

# tail -f /var/log/audit/audit.log | grep 'type=AVC'
type=AVC msg=audit(1420040376.300:54570): avc:  denied  { create } for  pid=983 comm="git" name="39" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=dir
type=SYSCALL msg=audit(1420040376.300:54570): arch=c000003e syscall=83 success=no exit=-13 a0=7af120 a1=1ff a2=7463656a626f2f2e a3=7fff1fb07c30 items=0 ppid=981 pid=983 auid=4294967295 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=4294967295 comm="git" exe="/usr/libexec/git-core/git" subj=system_u:system_r:httpd_t:s0 key=(null)

To solve this problem, change the security context of /data/git/repositories recursively and permanently:

semanage fcontext -a -t httpd_git_rw_content_t "/data/git/repositories(/.*)?"
restorecon -Rv /data/git/repositories

semanage only records the rule; restorecon actually applies the new context to the existing files.

Access with egit client in Eclipse

Do the following steps to clone your remote git repository.

  • Open the git configuration: Window -> Preferences -> Team -> Git -> Configuration and press the button Add Entry…
  • Add a new key http.sslVerify with value false (the command-line equivalent is shown right after this list)
  • Open the git perspective in Eclipse: Window -> Open Perspective -> Other… -> Git
  • Press the icon “Clone a Git Repository and add the clone to this view”
  • In the opening wizard choose Clone URI and press Next.
  • In the window “Source Git Repository” enter your repository URI, for example: https://my.domain.ch/my-project.git
  • As protocol choose https
  • Enter your user and password
  • Then press Next until you reach the window “Local Destination”
  • Provide the local path in the field Directory; this is the path where your local git copy will be placed
  • Check the box “Import all existing projects after clone finishes” and press Finish
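
For reference, the same (insecure) setting with a plain command-line git client would be a single command; use it only for a first test:

$ git config --global http.sslVerify false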

Did you notice the insecure step above? Yes, the key http.sslVerify with value false is bad. If you access your git repository like this, you are not protected against man-in-the-middle attacks. The better way is to import the certificate git.cert.crt from the server into your local keystore. Do the following steps to make it more secure.

  • Open the git configuration: Window -> Preferences -> Team -> Git -> Configuration and remove the key http.sslVerify
  • Try to make a push to your remote repository: Right mouse click on your eclipse project -> Team -> Remote -> Push…
  • Provide the repository uri for instance https://my.domain.ch/my-project.git and your user password combination and press next.
  • Now you will see an error message of course, because the certificate can’t be verified. The error message looks like:
    Transport Error: Cannot get remote repository refs.
    https://git.dropbit.ch/course-management.git: cannot open git-upload-pack
    
  • Copy the file git.cert.crt from the remote repository server to your local host where eclipse is running
  • Add git.cert.crt to your local keystore with the keytool command (you should find keytool in the java jdk bin directory):
    keytool -keystore C:\Java\jdk1.7.0_71\jre\lib\security\cacerts -storepass changeit -import -alias git -trustcacerts -v -file c:\tmp\git.cert.crt
    

    For your information, if you did something wrong you can delete a certificate from the keystore with:

    keytool -delete -noprompt -alias git -keystore C:\Java\jdk1.7.0_71\jre\lib\security\cacerts -storepass changeit
    
  • Close Eclipse and append the following to your eclipse.ini file:
    -Djavax.net.ssl.trustStore=C:\develop\Java\jdk1.7.0_71\jre\lib\security\cacerts
    -Djavax.net.ssl.trustStorePassword=changeit
    
  • Restart Eclipse and try again to push something to your remote git repository

The error message above should disappear now. I hope everything is working in your environment. If you have any questions or improvements, don’t hesitate to contact me. Thank you for reading my blog post.


Private docker registry with Nginx on CentOS 7


Unfortunately the docker registry does not handle authentication by itself. You could use the Docker Hub to push your own images to the public docker registry, but this is not a very good idea for non open source projects. There is also the possibility to buy private repositories. In most cases, however, an own private docker registry with SSL and authentication is desired. It took me a lot of time until everything was working on my server, because there were a lot of stumbling blocks. I hope you can save some time by reading my blog post “Private docker registry with Nginx on CentOS 7”.

Installing the docker registry as a container

Basically you can install the registry in two different ways: either on a bare metal host or inside a docker container. In this post we’ll use the container way, which is very easy. Just type:

$ docker pull registry

Next create the store location and give the needed permissions:

sudo mkdir -p /data/docker/private-registry/storage
sudo chmod 750 /data/docker/private-registry/storage
sudo chown 10000:10000 /data/docker/private-registry/storage

And now you can run a container from the downloaded registry image by entering the following command.

$ docker run \
    -d \
    --name private_registry \
    -e SETTINGS_FLAVOUR=local \
    -e STORAGE_PATH=/registry-storage \
    -v /data/docker/private-registry/storage:/registry-storage \
    -u 10000 \
    -p 5000:5000 \
    registry

The above command runs the registry on port 5000 and stores images on the host filesystem under /data/docker/private-registry/storage. It is interesting to know that the docker images are not stored inside the container: you need an extra store (partition) where your registry keeps the images. For example the docker container store can be on /dev/vda3 and your registry store on /dev/vda4. In this example /data is the mount point for /dev/vda4. When I installed the registry for the first time I made only one big partition with btrfs, because I was thinking the images would be stored inside the docker container, but this is wrong.
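
To double-check that the storage directory really lives on the intended partition (assuming, as above, that /data is the mount point of /dev/vda4), you can ask df:

$ df -h /data/docker/private-registry/storage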

With the command

$ docker ps

you should see the running containers. I was expecting to see the registry container, but nothing was shown. So I also checked the stopped containers with

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
6a78333cd3ec        registry:latest     "docker-registry"   2 minutes ago       Exited (3) 2 minutes ago                       private_registry

and could see my registry container. Actually I don’t know why the container exited, because the run command used the -d switch to run it detached.
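
In such a case the container’s log output is usually the quickest way to find the reason for the exit:

$ docker logs private_registry

Anyway, I simply started the container again with: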

$ docker start private_registry

Then confirm again with

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
6a78333cd3ec        registry:latest     "docker-registry"   7 minutes ago       Up 53 seconds       0.0.0.0:5000->5000/tcp   private_registry

Fortunately the container is now running and listening on port 5000. Use curl to double-check that everything is all right:

$ curl localhost:5000
"\"docker-registry server\""

If you can see the string “docker-registry server”, everything is ok. But what will happen after a system reboot? The docker daemon itself will start again, because

$ systemctl enable docker.service

was used. But the registry container will certainly not be running anymore. You can solve this with a systemd unit file, as shown below:

[Unit]
Description=Private Docker Registry
Author=chris.koller@dropbit.ch
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a private_registry
ExecStop=/usr/bin/docker stop -t 60 private_registry

[Install]
WantedBy=multi-user.target

Save the unit file as /usr/lib/systemd/system/private-docker-registry.service.
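
After creating a new unit file, systemd has to re-read its configuration before the unit is picked up:

$ systemctl daemon-reload

With the following command you should now see the newly created private-docker-registry service: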

$ systemctl list-unit-files | grep docker
docker.service                              enabled
private-docker-registry.service             disabled
docker.socket	       

As you can see, private-docker-registry.service is currently disabled; let’s change that with

$ systemctl enable private-docker-registry.service
ln -s '/usr/lib/systemd/system/private-docker-registry.service' '/etc/systemd/system/multi-user.target.wants/private-docker-registry.service'

After the above command the docker registry will be started automatically after a system reboot. Use the next steps to confirm everything works as expected:

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
6a78333cd3ec        registry:latest     "docker-registry"   3 days ago          Up 30 minutes       0.0.0.0:5000->5000/tcp   private_registry

This tells us container private_registry is running and listening on port 5000. Now it’s time to connect to the private registry for the second time:

$ curl localhost:5000
"\"docker-registry server\""

If you can see the string “docker-registry server”, then everything works as expected. At this point the private registry is completely open and insecure. On CentOS 7, however, nobody from outside can access the registry yet, because the firewall firewalld blocks port 5000.

Installing Nginx

Because the docker registry does not handle authentication, you need to put a proxy in front of it to make your private registry secure. In this tutorial we use Nginx, a high-concurrency, high-performance, low-memory-usage open source reverse proxy server. You could also use Apache or something else. Use the following commands to install nginx and enable it at boot time:

$ rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
$ yum install nginx
$ systemctl enable nginx.service
$ systemctl start nginx.service

Because we want to access our private docker registry via http and https, we need to open the http and https services in our firewall:

$ firewall-cmd --zone=public --list-all
$ firewall-cmd --permanent --zone=public --add-service=http 
$ firewall-cmd --permanent --zone=public --add-service=https
$ firewall-cmd --reload
$ firewall-cmd --zone=public --list-all

At this point you should be able to open your server’s IP address in a browser and see something like:

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Configure access through Nginx to your private docker registry

First create the file /etc/nginx/sites-available/secure.my.domain.ch; if the folder sites-available does not exist, create it. Replace my.domain.ch with the domain you want to use for accessing the private registry. Then add the following content to the file:

# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream private-docker-registry {
 server localhost:5000;
}

server {
 listen 443;
 server_name my.domain.ch;

 #ssl on;
 #ssl_certificate /data/ssl/certs/my.domain.ch.crt;
 #ssl_certificate_key /data/ssl/private/my.domain.ch.key;

 proxy_set_header Host       $http_host;   # required for Docker client sake
 proxy_set_header X-Real-IP  $remote_addr; # pass on real client IP

 client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

 # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
 chunked_transfer_encoding on;

 location / {
     # let Nginx know about our auth file
     auth_basic              "Restricted";
     auth_basic_user_file    /data/ssl/docker-registry.htpasswd;

     proxy_pass http://private-docker-registry;
 }
 location /_ping {
     auth_basic off;
     proxy_pass http://private-docker-registry;
 }
 location /v1/_ping {
     auth_basic off;
     proxy_pass http://private-docker-registry;
 }

}

Notice the path /data/ssl/docker-registry.htpasswd in the file secure.my.domain.ch above. This is the file in which nginx looks up users and passwords for basic authentication. Change the path if needed and create the file docker-registry.htpasswd as follows:

$ yum provides \*bin/htpasswd
$ yum install httpd-tools-2.4.6-18.el7.centos.x86_64
$ mkdir -p /data/ssl/
$ htpasswd -c /data/ssl/docker-registry.htpasswd myuser

Enter a password when prompted and note it down. Next we have to make sure that our Nginx virtual host configuration file can be found. Open the file /etc/nginx/nginx.conf and add the following after the line “include /etc/nginx/conf.d/*.conf;”:

include /etc/nginx/sites-enabled/*;

Furthermore we need a symbolic link from sites-enabled to sites-available:

$ mkdir -p /etc/nginx/sites-enabled
$ cd /etc/nginx/sites-enabled
$ ln -s /etc/nginx/sites-available/secure.my.domain.ch secure.my.domain.ch
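
Before reloading, it is worth verifying the configuration syntax:

$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful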

After the configuration changes we have to tell nginx to reload the configuration files.

systemctl reload nginx.service

Now it’s time to access our private registry through nginx.

$ curl localhost:443

What a surprise, we are not allowed to access the registry. That’s correct, because we told our nginx virtual host to use basic authentication. Now provide your user and password and try again:

curl myuser:test@localhost:443

Hmm, not much better: now we get a 500 Internal Server Error. What’s going on here? This should work. Let’s see whether the nginx log file can tell us something:

tail -f /var/log/nginx/error.log
2014/12/22 14:15:01 [crit] 3877#0: *6 open() "/data/ssl/docker-registry.htpasswd" failed (13: Permission denied), client: 127.0.0.1, server: my.domain.ch, request: "GET / HTTP/1.1", host: "localhost:443"
2014/12/22 14:16:09 [crit] 3877#0: *7 open() "/data/ssl/docker-registry.htpasswd" failed (13: Permission denied), client: 127.0.0.1, server: my.domain.ch, request: "GET / HTTP/1.1", host: "localhost:443"
2014/12/22 14:17:14 [crit] 3877#0: *8 open() "/data/ssl/docker-registry.htpasswd" failed (13: Permission denied), client: 127.0.0.1, server: my.domain.ch, request: "GET / HTTP/1.1", host: "localhost:443"

My friend SELinux is blocking here; we need to add a rule first:

$ yum install policycoreutils-python
$ grep nginx /var/log/audit/audit.log | audit2allow -m nginx > nginx.te
$ grep nginx /var/log/audit/audit.log | audit2allow -M nginx
$ semodule -i nginx.pp
$ rm -rf nginx.pp
$ rm -rf nginx.te
$ systemctl reload nginx.service
$ systemctl restart nginx.service

You can find more information about the rule above here. Now let’s try again.

curl myuser:test@localhost:443

And again something blocks our request, this time we face a 502 Bad Gateway. The log /var/log/nginx/error.log shows the following this time:

2014/12/22 15:02:53 [crit] 4169#0: *7 connect() to 127.0.0.1:5000 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: my.domain.ch, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5000/", host: "localhost:443"
2014/12/22 15:02:54 [error] 4169#0: *10 no live upstreams while connecting to upstream, client: 127.0.0.1, server: my.domain.ch, request: "GET / HTTP/1.1", upstream: "http://private-docker-registry/", host: "localhost:443"

To avoid this problem, type:

$ setsebool -P httpd_can_network_connect true
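
The -P flag makes the boolean persistent across reboots; you can verify the new value with:

$ getsebool httpd_can_network_connect
httpd_can_network_connect --> on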

All good things come in threes, type again:

curl myuser:test@localhost:443

And you should see again the famous string “docker-registry server”. What does this mean so far?
1. Nginx receives the http request on port 443 and proxies to the private docker registry.
2. Without providing user and password nginx prevents access.
3. Basic Authentication is in place.
4. No HTTPS (SSL) configured so far.

Configure Nginx to use ssl

Basic authentication without ssl is not secure because the connection is unencrypted, therefore we add ssl to our configuration.
First we need a self-signed SSL certificate. Since Docker currently doesn’t accept plain self-signed SSL certificates, this is a bit more complicated than usual: we will set up our system to act as our own certificate signing authority.
In the first step create a new root key with:

$ mkdir /tmp/certs
$ cd /tmp/certs
$ openssl genrsa -out dockerCA.key 2048

Then create a root certificate. You don’t have to answer the upcoming questions, just hit enter:

$ openssl req -x509 -new -nodes -key dockerCA.key -days 3650 -out dockerCA.crt

Then create a private key for your Nginx Server:

$ openssl genrsa -out my.domain.ch.key 2048

Next a certificate signing request is needed. Answer the question for “Common Name” with the domain of your server, e.g. my.domain.ch; this is the name under which you will access your private docker registry later. Don’t provide a challenge password.

$ openssl req -new -key my.domain.ch.key -out my.domain.ch.csr

Afterwards we need to sign the certificate request:

$ openssl x509 -req -in my.domain.ch.csr -CA dockerCA.crt -CAkey dockerCA.key -CAcreateserial -out my.domain.ch.crt -days 3650

Now open the file /etc/nginx/sites-available/secure.my.domain.ch again and look for the lines:

...
#ssl on;
#ssl_certificate /data/ssl/certs/my.domain.ch.crt;
#ssl_certificate_key /data/ssl/private/my.domain.ch.key;
...

Remove the hashtags and make sure ssl_certificate and ssl_certificate_key point to your newly generated my.domain.ch.crt and my.domain.ch.key files.
Since the certificate we just generated isn’t verified by any known certificate authority (e.g. VeriSign), we need to tell any clients that are going to use this Docker registry that it is a legitimate certificate. Let’s do this locally first, so that we can use Docker from the registry server itself:

$ update-ca-trust enable
$ cp dockerCA.crt /etc/pki/ca-trust/source/anchors/
$ update-ca-trust extract
$ systemctl reload nginx.service

After that our host accepts the certificate and we should be able to access our private docker registry with https.

curl https://myuser:test@my.domain.ch

If everything works as expected you will see the famous string “docker-registry server” again. Notice that you need to access the registry via your domain; localhost, for example, will not work:

$ curl https://myuser:test@localhost
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.

Use docker with your private remote docker registry

Now the time is ripe for end-to-end testing. I assume that you have installed a docker daemon somewhere and are able to enter docker commands, for example:

$ docker info

Now let’s try to request the registry:

$ curl https://myuser:test@my.domain.ch

You will probably see something like:

curl: (60) Peer's certificate has an invalid signature.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.

Because we used a self-signed certificate, we have to install the CA certificate on the client host as well. Copy the file dockerCA.crt from the docker registry host into the directory /etc/pki/ca-trust/source/anchors/ on the client and enter the following commands:

$ update-ca-trust enable 
$ update-ca-trust extract

And now try again:

$ curl https://myuser:test@my.domain.ch

If you can see the string “docker-registry server”, you have successfully connected to your private docker registry via https with basic authentication. Yes, a secure connection is in place. Congratulations! Try to connect with docker now.

$ docker login --username='myuser' --password='test' --email="chris.koller@dropbit.ch" https://my.domain.ch

Hopefully you can see “Login Succeeded” now. If that is the case, it was well worth the effort. Use the next docker commands to pull a docker image, tag it and push it to your registry:

$ docker pull busybox
$ docker tag busybox:latest my.domain.ch/mybusybox:latest
$ docker push my.domain.ch/mybusybox:latest
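
To verify the whole round trip, you can remove the local tag and pull the image back from your own registry (image name taken from the example above):

$ docker rmi my.domain.ch/mybusybox:latest
$ docker pull my.domain.ch/mybusybox:latest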

If you have some questions, don’t hesitate to contact me. Thanks for reading and share if you like it.


How to install and use docker with btrfs on CentOS 7


In this tutorial I want to show how you can install docker on CentOS 7 and use a btrfs partition as the underlying store.

Install docker

$ yum update
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * base: centos.mirror.sharkservers.co.uk
 * extras: centosmirror.netcup.net
 * updates: mirror.softaculous.com
10055 packages excluded due to repository priority protections
No packages marked for update

If you see a line like “10055 packages excluded due to repository priority protections”, then a yum configuration change is needed. The message means some packages are held by more than one repository; the priorities plugin chooses packages from the highest-priority repository, excluding duplicate entries from other repos. If you don’t update first, yum install docker will not work because of dependency problems. To make this work, edit /etc/yum/pluginconf.d/priorities.conf and change the content to:

[main]
enabled=0
check_obsoletes=1

Now try again to install docker:

$ yum update
$ yum install docker

Now you can enable docker to start on boot time:

$ systemctl enable docker.service
ln -s '/usr/lib/systemd/system/docker.service' '/etc/systemd/system/multi-user.target.wants/docker.service'

Let’s check the status of the service:

$ systemctl status docker.service
docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com

Docker is still not running. Now reboot your machine or start docker with:

$ systemctl start docker.service

Afterwards we want to see some docker information to confirm everything works as expected:

$ docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
 Pool Name: docker-253:1-683-pool
 Pool Blocksize: 65.54 kB
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 307.2 MB
 Data Space Total: 107.4 GB
 Metadata Space Used: 733.2 kB
 Metadata Space Total: 2.147 GB
 Library Version: 1.02.84-RHEL7 (2014-03-26)
Execution Driver: native-0.2
Kernel Version: 3.10.0-123.el7.x86_64
Operating System: CentOS Linux 7 (Core)

Device mapper thin provisioning

By default docker uses device mapper thin provisioning to manage containers if AUFS is not available in the operating system, which is the case when you install docker on CentOS 7. For the default storage type “device mapper” no additional configuration is needed. The drawback: all containers are stored in the root partition under /var/lib/docker. Enter the following to see more information on a default docker system:

$ sudo lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0                            2:0    1    4K  0 disk 
sda                            8:0    0    8G  0 disk 
├─sda1                         8:1    0  500M  0 part /boot
└─sda2                         8:2    0  7.5G  0 part 
  ├─centos-swap              253:0    0  820M  0 lvm  [SWAP]
  └─centos-root              253:1    0  6.7G  0 lvm  /
sr0                           11:0    1 1024M  0 rom  
loop0                          7:0    0  100G  0 loop 
└─docker-253:1-25857769-pool 253:2    0  100G  0 dm   
loop1                          7:1    0    2G  0 loop 
└─docker-253:1-25857769-pool 253:2    0  100G  0 dm   

Notice the loopback-mounted devices. You can use docker this way on a developer machine, but don’t use it on a production system. Furthermore the size is limited to 100 GB; the real disk usage is less, only as much as your docker containers currently need. If you need a store bigger than 100 GB or a faster one, you can use a real device instead of a file-backed loop device. More about the device mapper can be found on Jérôme Petazzoni’s blog: Resizing Docker containers with the Device Mapper plugin.
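
As a sketch of the real-device variant (the dm.datadev and dm.metadatadev storage options existed in the docker 1.x releases of that time and were deprecated later; the device names below are just examples), you could point the daemon at dedicated block devices via the OPTIONS line in /etc/sysconfig/docker:

OPTIONS=--storage-opt dm.datadev=/dev/vdb1 --storage-opt dm.metadatadev=/dev/vdb2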

Docker and btrfs

Btrfs is a newer copy-on-write (CoW) filesystem for Linux; you can find more information about btrfs here. The Red Hat developer blog states: btrfs seems the most natural fit for Docker. If you install a new CentOS operating system, make sure to create a partition with the btrfs filesystem and mount it to /var/lib/docker. If you have installed your system already, use the following commands (make sure /dev/vda3 or another empty partition exists):

$ systemctl stop docker
$ rm -rf /var/lib/docker
$ yum install -y btrfs-progs btrfs-progs-devel
$ mkfs.btrfs -f /dev/vda3   # caution: this will delete all data on /dev/vda3!
$ mkdir /var/lib/docker
$ echo "/dev/vda3 /var/lib/docker btrfs defaults 0 0" >> /etc/fstab
$ mount -a

If you are not sure about your hard disk partitions, you can use the following commands to show partitions and disk usage, or to delete and create partitions with fdisk:

$ cat /proc/partitions
major minor  #blocks  name

 253        0 1023410176 vda
 253        1    7168000 vda1
 253        2    1024000 vda2
 253        3  419430400 vda3
 253        4  595786752 vda4
  11        0    1048575 sr0
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       6.7G  1.2G  5.1G  19% /
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           7.8G  8.3M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/vda3       400G  512K  398G   1% /var/lib/docker
/dev/vda4       560G   73M  531G   1% /data
# fdisk /dev/vda

Now adapt the docker configuration to use btrfs; afterwards it should look similar to the following:

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs
#OPTIONS=--selinux-enabled -H fd://
OPTIONS=-H fd:// -D -s btrfs

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

Because btrfs does not currently support SELinux, the OPTIONS line no longer contains the --selinux-enabled switch. -s btrfs forces the Docker runtime to use the btrfs storage driver. That’s all; now start the docker daemon and check the status by typing:

$ systemctl start docker
$ systemctl status docker

Now confirm the new store type with:

$ docker info
Containers: 0
Images: 0
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.10.0-123.13.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
Debug mode (server): true
Debug mode (client): false
Fds: 10
Goroutines: 11
EventsListeners: 0
Init SHA1: c906504aa058139c1d0569ecd0aa5f462a73440f
Init Path: /usr/libexec/docker/dockerinit

Now try to pull a docker image and run a container from it.

$ docker pull busybox
$ docker images
$ docker run -it --rm busybox

Type exit and hit enter if you want to stop the busybox container. More docker commands can be found here.

Now I hope you are able to use docker with btrfs. Thanks for reading my blog and drop me a mail, if you have any questions.


How to Clone & Reuse VirtualBox Disk-Images under Linux

When working with Oracle VirtualBox you might have come across situations where you wanted to copy the state of a given Disk-Image and build a new Virtual Machine upon it. We tell you how to do that.

a) Preparations

Before starting to make manipulations, make sure you have a recent backup of your VM with all its files.

Then create a new Machine of the desired OS, which is the one you have an existing Disk-Image for. You can create new Machines using the UI application ‘Oracle VM VirtualBox Manager’:

[Screenshot: How to Create a new Virtual Machine]

By now, a new folder has been created in your VirtualBox VMs directory, which is something like ‘/home/phabi/VirtualBox VMs’. Now you can switch to the folder of your desired source VM, i.e. ‘/home/phabi/VirtualBox VMs/myoldVM/’, where you will find the source Disk-Image to be copied, named something like ‘old_disk_image.vdi’. [Attention: copying this image only takes the Base Disk-Image this VM is based on. If you have Snapshots, this information will not be contained in the Disk-Image unless you merge them back to the root node.]

b) Copy your existing Disk-Image

Now take that *.vdi Disk-Image and copy it to the newly created location, in our case ‘/home/phabi/VirtualBox VMs/mynewVM’, and name it, say, ‘new_disk_image.vdi’.

If you continued now with ‘Oracle VM VirtualBox Manager’, the application would complain that the same Disk-Image is already in use: Oracle VirtualBox Manager doesn’t allow you to run VMs using Disk-Images with the same identifier (UUID). That’s why you have to assign it a new UUID using Oracle’s VBoxManage utility, located somewhere like ‘/usr/lib/virtualbox’.

c) Assign a new UUID to your Disk-Image

To change the UUID of the newly copied Disk-Image, open a Terminal window and enter the following command, provided the VBoxManage utility location is contained in your $PATH variable; otherwise cd to its location and adapt the arguments accordingly:

phabi # ./VBoxManage internalcommands sethduuid "/home/phabi/VirtualBox VMs/mynewVM/new_disk_image.vdi"
UUID changed to: 214d8f33-3ba9-4123-b646-0d7eabef38a1
phabi #

If the VirtualBox is not located in your home directory, you might need to run the command with root privileges.

d) Finish the VM-Creation Process

Then switch back to the ‘Oracle VM VirtualBox Manager’. In the following screen select the RAM to be assigned to the new VM, click [Next], and in the screen titled ‘Hard drive’ select the option (x) Use an existing virtual hard drive file, choose the disk you have just copied, then click [Create].

That’s it. Now you should have your new VM ready based on that new Disk-Image.


We at dropbit are fond of docker


Build, Ship and Run Any App, Anywhere. We don’t have to worry about where our applications are running. If we need to do a server update, we don’t have to be afraid anymore: we can quickly move the applications running on this server away, do the server update and, if everything went well, move our applications back to the updated server. With docker our application infrastructure became much simpler. dropbit therefore decided to share our experience with docker; from now on we will blog regularly about it. In this first post we just show some important commands you have to know. We are sure you will become fond of docker too. It’s just a matter of time.

Image Commands

pull an image from the docker hub

The docker hub is a big collection of base images, which you can use for your applications.

docker pull motiejus/systemd_centos7

show local images

docker images

build an image from an own Dockerfile

docker build -t myimage .
docker build --no-cache -t myimage .

delete an image

docker rmi myimage

delete all dangling images

sudo docker rmi $(sudo docker images -f "dangling=true" -q)

container commands

show running containers

docker ps

show all containers

docker ps -a

run a container from an image

docker run --name myimage_instance -i -t myimage:latest

run a container in the background

docker run -d --name myimage_instance -i -t myimage:latest

stop a container

docker stop myimage_instance

delete a container

docker rm myimage_instance

delete all containers

docker rm $(docker ps -a -q)

miscellaneous

To get the container’s ip address, run the 2 commands

docker ps
docker inspect container_name | grep IPAddress

push an image to a local repository

docker tag bd393c80a9ca localhost:5000/dropbitbase
docker push localhost:5000/dropbitbase

linking containers together

https://docs.docker.com/userguide/dockerlinks

This overview was initially thought of just for our team, but why not share the list with you. If you have some more important commands to add to the overview, just let us know.


How to install Apache Karaf 3.0.1 as a daemon

In this tutorial I show you how to install Apache Karaf 3.0.1 as a daemon (service) on your Linux system.

Download and install Apache Karaf 3.0.1

First go to the location where you want to install Apache Karaf 3.0.1.

$ cd /usr/share

Then download the newest Karaf Binary Distribution

$ sudo wget http://mirror.switch.ch/mirror/apache/dist/karaf/3.0.1/apache-karaf-3.0.1.tar.gz

In the next step we unpack the gz file.

$ sudo tar -zxvf apache-karaf-3.0.1.tar.gz

Now you can start the karaf with

$ cd apache-karaf-3.0.1/bin
$ sudo ./karaf

If you can see the following lines, then karaf started successfully and you can use the karaf console for the next steps.

karaf: JAVA_HOME not set; results may vary
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (3.0.1)

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.

karaf@root()>

Now Karaf is running, but not as a Linux daemon; as soon as you leave the Karaf console with

'<ctrl-d>', 'system:shutdown' or 'logout'

Karaf will stop. In the next step we’ll set up Karaf as a daemon for production use.

Apache Karaf 3.0.1 as a Daemon for production use

Fortunately Karaf has a feature to install itself as a daemon:

karaf@root()> feature:install service-wrapper

Afterwards check whether the feature service-wrapper has been installed correctly:

karaf@root()> feature:list | grep service-wrapper
service-wrapper               | 3.0.1            | x         | standard-3.0.1          | Provide OS integration (alias to wrapper feature)

If you can see the x character, everything went fine. Now use the new command to install the service wrapper:

karaf@root()> wrapper:install
Creating file: /usr/share/apache-karaf-3.0.1/bin/karaf-wrapper
Creating file: /usr/share/apache-karaf-3.0.1/bin/karaf-service
Creating file: /usr/share/apache-karaf-3.0.1/etc/karaf-wrapper.conf
Creating file: /usr/share/apache-karaf-3.0.1/lib/libwrapper.so
Creating file: /usr/share/apache-karaf-3.0.1/lib/karaf-wrapper.jar
Creating file: /usr/share/apache-karaf-3.0.1/lib/karaf-wrapper-main.jar

Setup complete.  You may wish to tweak the JVM properties in the wrapper configuration file:
        /usr/share/apache-karaf-3.0.1/etc/karaf-wrapper.conf
before installing and starting the service.


RedHat/Fedora/CentOS Linux system detected:
  To install the service:
    $ ln -s /usr/share/apache-karaf-3.0.1/bin/karaf-service /etc/init.d/
    $ chkconfig karaf-service --add

  To start the service when the machine is rebooted:
    $ chkconfig karaf-service on

  To disable starting the service when the machine is rebooted:
    $ chkconfig karaf-service off

  To start the service:
    $ service karaf-service start

  To stop the service:
    $ service karaf-service stop

  To uninstall the service :
    $ chkconfig karaf-service --del
    $ rm /etc/init.d//usr/share/apache-karaf-3.0.1/bin/karaf-service

As written above, install the service on Linux. First create the service symbolic link:

sudo ln -s /usr/share/apache-karaf-3.0.1/bin/karaf-service /etc/init.d/

Add the new service to the chkconfig management.

sudo chkconfig karaf-service --add

Tell chkconfig to start the karaf-service on boot time.

sudo chkconfig karaf-service on

If you start karaf with

sudo service karaf-service start

then you can read:

Startng karaf...

Now check whether the karaf is really running.

$ ps -ef | grep karaf
devteam   7685  4013  0 14:08 pts/0    00:00:00 grep karaf

On my machine karaf isn’t running; the question is why? Fortunately there is a wrapper log file with more information:

$ cat /usr/share/apache-karaf-3.0.1/data/log/wrapper.log
STATUS | wrapper  | 2014/09/04 14:02:50 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2014/09/04 14:02:51 | Launching a JVM...
ERROR  | wrapper  | 2014/09/04 14:02:51 | Unable to start JVM: No such file or directory (2)
ERROR  | wrapper  | 2014/09/04 14:02:51 | JVM exited while loading the application.
STATUS | wrapper  | 2014/09/04 14:02:55 | Launching a JVM...
ERROR  | wrapper  | 2014/09/04 14:02:55 | Unable to start JVM: No such file or directory (2)
ERROR  | wrapper  | 2014/09/04 14:02:55 | JVM exited while loading the application.
STATUS | wrapper  | 2014/09/04 14:02:59 | Launching a JVM...
ERROR  | wrapper  | 2014/09/04 14:02:59 | Unable to start JVM: No such file or directory (2)
ERROR  | wrapper  | 2014/09/04 14:02:59 | JVM exited while loading the application.
STATUS | wrapper  | 2014/09/04 14:03:03 | Launching a JVM...
ERROR  | wrapper  | 2014/09/04 14:03:03 | Unable to start JVM: No such file or directory (2)
ERROR  | wrapper  | 2014/09/04 14:03:03 | JVM exited while loading the application.
STATUS | wrapper  | 2014/09/04 14:03:07 | Launching a JVM...
ERROR  | wrapper  | 2014/09/04 14:03:07 | Unable to start JVM: No such file or directory (2)
ERROR  | wrapper  | 2014/09/04 14:03:07 | JVM exited while loading the application.
FATAL  | wrapper  | 2014/09/04 14:03:07 | There were 5 failed launches in a row, each lasting less than 300 seconds.  Giving up.
FATAL  | wrapper  | 2014/09/04 14:03:07 |   There may be a configuration problem: please check the logs.
STATUS | wrapper  | 2014/09/04 14:03:07 | <-- Wrapper Stopped
 

So what happened? Karaf needs Java to run and was trying to launch the JVM, but the JVM could not be found! Above, Karaf told us where to find the wrapper configuration file. Please open the file now:

sudo vi /usr/share/apache-karaf-3.0.1/etc/karaf-wrapper.conf

And search for the following lines.

#********************************************************************
# Wrapper Properties
#********************************************************************
set.default.JAVA_HOME=null

For some reason JAVA_HOME is set to null here, and further down in the configuration file you will find this line:

wrapper.java.command=%JAVA_HOME%/bin/java

Of course the command null/bin/java cannot be found, and the log file tells us:

Unable to start JVM: No such file or directory (2)

Change the line

set.default.JAVA_HOME=null

to something more sensible, for example

set.default.JAVA_HOME=/usr/java/latest

or wherever your Java has been installed.
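
If you are not sure where your Java is installed, you can usually resolve it from the java binary on your PATH; JAVA_HOME is then the directory above bin (the path in the output below is just an example):

$ readlink -f $(which java)
/usr/java/jdk1.7.0_71/jre/bin/java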
Now try to start Karaf again.

sudo service karaf-service start

Let's check whether Karaf is really running or not.

$ ps -ef | grep karaf
root      7987     1  0 14:28 ?        00:00:00 /usr/share/apache-karaf-3.0.1/bin/karaf-wrapper /usr/share/apache-karaf-3.0.1/etc/karaf-wrapper.conf
root      7989  7987 18 14:28 ?        00:00:14 /usr/java/latest/bin/java -Dkaraf.home=/usr/share/apache-karaf-3.0.1 -Dkaraf.base=/usr/share/apache-
r:/usr/share/apache-karaf-3.0.1/lib/karaf.jar:/usr/share/apache-karaf-3.0.1/lib/karaf-jmx-boot.jar:/usr/share/apache-karaf-3.0.1/lib/karaf-jaas-boot
devteam   8045  4013  0 14:29 pts/0    00:00:00 grep karaf

Yes, this time Karaf runs smoothly. But have you seen the ugly thing here? The Karaf process is running as user root. This is always a bad idea; never run a service as root!
Go and stop the karaf service.

sudo service karaf-service stop

In the next part we'll change the configuration to start Karaf as user karaf.

Start Apache Karaf 3.0.1 as user karaf and not as root

Add a new user group.

$ sudo groupadd karaf

Add a new user.

$ sudo useradd -s /bin/bash -g karaf karaf

Now change the owner of the apache-karaf-3.0.1 directory to karaf.

$ sudo chown -Rf karaf.karaf /usr/share/apache-karaf-3.0.1

Open the karaf service file and search for RUN_AS_USER.

sudo vi /etc/init.d/karaf-service

Change the line to:

RUN_AS_USER=karaf

and start karaf again.

sudo service karaf-service start

Check the Karaf process.

sudo ps -ef | grep karaf
karaf     8715     1  0 14:52 ?        00:00:00 /usr/share/apache-karaf-3.0.1/bin/karaf-wrapper /usr/share/apache-karaf-3.0.1/etc/karaf-wrapper.conf wrapper.syslog.ident=karaf wrapper.pidfile=/usr/share/apache-karaf-3.0.1/data/k
karaf     8717  8715 69 14:52 ?        00:00:03 /usr/java/latest/bin/java -Dkaraf.home=/usr/share/apache-karaf-3.0.1 -Dkaraf.base=/usr/share/apache-karaf-3.0.1 -Dkaraf.data=/usr/share/apache-karaf-3.0.1/data -Dkaraf.etc=/usr/sha
r:/usr/share/apache-karaf-3.0.1/lib/karaf.jar:/usr/share/apache-karaf-3.0.1/lib/karaf-jmx-boot.jar:/usr/share/apache-karaf-3.0.1/lib/karaf-jaas-boot.jar:/usr/share/apache-karaf-3.0.1/lib/karaf-wrapper-main.jar:/usr/share/apache-
devteam   8751  4013  0 14:52 pts/0    00:00:00 grep karaf

As you can see above, Apache Karaf now runs as user karaf and as a daemon (service). After a system reboot karaf will be started automatically.
Thank you for reading my blog entry and please share if you like it.


How to change ssh port on CentOS 7

Recently one of our servers welcomed us with ugly messages like:

Last failed login: Fri Aug 22 19:31:42 CEST 2014 from xx.xxx.xxx.xx on ssh:notty
There were 17307 failed login attempts since the last successful login.
Last login: Wed Aug 13 11:11:55 2014 from yyy-yyy-yyy-yyy

The simplest way to prevent such attacks is to change the ssh port. Mostly these attacks come from dumb robots trying to hack the open ssh port (22) with many thousands of tries.
Do the following steps to change the ssh server port on your machine:

  • Edit /etc/ssh/sshd_config and uncomment the Port line, changing it to something like “Port 4444”.
  • Because CentOS 7 ships with Security-Enhanced Linux (SELinux), you have to tell SELinux that running ssh on the new port 4444 is allowed. This can be done with the command “semanage”.
  • On a minimal CentOS 7 system the command “semanage” is missing; install it with “sudo yum install policycoreutils-python”.
  • Afterwards you can use “semanage port -a -t ssh_port_t -p tcp 4444”, now SELinux allows sshd to listen on the new port 4444.
  • Check the configuration with “semanage port -l | grep ssh” and you should see something like:

    ssh_port_t tcp 4444, 22

  • Actually you no longer need the old port 22, and you might have the idea to delete it from SELinux with “semanage port -d -t ssh_port_t -p tcp 22”, but this isn’t possible and you will see the message:

    ValueError: Port tcp/22 is defined in policy, cannot be deleted

  • Now restart the ssh daemon with “systemctl restart sshd.service”. Because CentOS 7 is a systemd-based OS, you have to use the systemctl command to start, stop and restart services. In earlier versions you would have used “service sshd restart”, for example. Systemd is just another process manager; CentOS 6 used upstart, and older versions used System V init.
  • After the restart, check the log for problems with “tail -f /var/log/secure”. Everything should be fine if you can see something like:

    Aug 22 22:54:38 xxx sshd[2309]: Server listening on 0.0.0.0 port 4444.
    Aug 22 22:54:38 xxx sshd[2309]: Server listening on :: port 4444.

  • Don’t forget the firewall. At this point you would be able to connect to your ssh server with “ssh -p 4444 myuser@x.xx.xxx.xx” if your firewall does not refuse the connection. Just open the port in your firewall to allow access to your sshd from outside (see the firewalld sketch after this list). On CentOS 7 iptables was replaced by firewalld. Read in my next blog how to disable firewalld and activate iptables on CentOS 7 again.
  • If you want to check whether sshd is really listening on the new port, use netstat. Because netstat isn’t installed by default on a minimal CentOS 7 system, install it with “sudo yum install net-tools” and then run “netstat -tulpn | grep :4444”.

    tcp 0 0 0.0.0.0:4444 0.0.0.0:* LISTEN 2309/sshd
    tcp6 0 0 :::4444 :::* LISTEN 2309/sshd
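
If you stay with firewalld, opening the new port is a short sequence; a minimal sketch, using the example port 4444 from above:

firewall-cmd --permanent --add-port=4444/tcp
firewall-cmd --reload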

My biggest problem when changing the ssh port was the firewall on our CentOS 7 system. I tried to open the new port with “iptables -A INPUT -p tcp --dport 4444 -j ACCEPT” and this worked: I could log in via ssh on the new port, until the machine was restarted. After the restart, login on port 4444 was not possible anymore. I entered “iptables -A INPUT -p tcp --dport 4444 -j ACCEPT” again and access was permitted again. Because CentOS 7 has a new firewall called firewalld, iptables was not started at boot time, so the rule was never applied at boot. I finally decided to uninstall firewalld and install iptables fully. From then on everything worked as expected. If you have similar problems, don’t hesitate to contact me. My email can be found on the dropbit website.
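
For reference, here is a minimal sketch of the iptables route I ended up with (iptables-services comes from the CentOS 7 base repository; “service iptables save” writes the rules to /etc/sysconfig/iptables so they survive a reboot):

# yum install iptables-services
# systemctl stop firewalld
# systemctl disable firewalld
# systemctl enable iptables
# iptables -A INPUT -p tcp --dport 4444 -j ACCEPT
# service iptables save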

Posted in Data Security

WLAN dead after Microsoft Windows System Updates

This post is for the minority of users whose netbook’s WLAN facility has been killed by an automatic Microsoft* update and who still have a chance of accessing the internet (e.g. by using a Linux-based device) to search for a fix. If you just found your WLAN dead after Microsoft Windows* system updates, you might want to read on.

Malicious Microsoft Windows Updates

Problem

Just half an hour ago I agreed to let Windows 7* install the proposed system updates. After rebooting my SAMSUNG Ativ Book 940X3G**, the WLAN connection failed. All the WLAN icon showed was “limited network access”. The self-diagnostic didn’t come up with a solution, while all other laptops (not running Windows*) kept working and connecting wherever I wanted them to. Then I checked the recently installed Microsoft* updates:

  • KB2981580
  • KB2980245 (I think that’s our evildoer)
  • KB2952664

There were tons of other updates, but those were related to Office* vulnerabilities and the like, so I assumed one of these three to be the cause of the recent destruction of my laptop’s WLAN facility. Since no information is available on Microsoft*’s sites as of today, the best thing to do is to remove all of them and wait until a fix for the fix is available.

Solution

The solution worked in my configuration; it might be tied to my system (Windows 7 Professional*, 64-bit, SP1, automatic updates installed on August 28th, 2014) and laptop (SAMSUNG Ativ Book 940X3G**), just after having installed the presumably corrupt update(s) from Microsoft*.

CAUTION: If you have newer updates installed, you might have to find another solution: Microsoft* will soon fix that fix, and then the reason for your WLAN malfunction might be another update.

1) Hit the ‘Windows*’ key and enter the search text ‘View installed updates’.

2) Find the entries from August 18th, 2014, named:

  • KB2981580
  • KB2980245
  • KB2952664

3) In my case these updates were the most recent ones installed in the ‘Microsoft Windows*’ software-updates group.

4) Get rid of them by double-clicking each one of them and confirming the ‘uninstall’ question.

5) Restart your machine.

That’s it, your computer should be ready to work with again.

*Microsoft Windows and the corresponding Microsoft product names are registered trademarks of Microsoft Corporation in the United States and/or other countries.
**Samsung Ativ is a trademark of Samsung Electronics Co., Ltd. and may be a registered or unregistered trademark.

Posted in Knowledge

Confusing Comparable- & Comparator-Contract

Read on to learn more about the confusing Comparable & Comparator contract in Java. I recently came across the following exception, thrown by the Java Runtime Environment:

“Comparison method violates its general contract!”

The exception occurred in a Comparable implementation looking similar to the following code, when Collections.sort() was invoked on an ArrayList of the Foo type:

public class Foo implements Comparable<Bar> {

...

  @Override
  public int compareTo(Bar bar) {
    Date thisDate = this.getCreateDate();
    Date otherDate = bar.getCreateDate();
    if (thisDate.after(otherDate)) {
      return -1;
    }
    return 1;
  }
  ...
}

I was a bit confused because, as far as I knew, a Comparator doesn’t necessarily have to be consistent with equals (see here). The same holds when implementing the Comparable interface (see here).

Until Java 1.6, the sort functionality was fine with equals-inconsistent Comparator / Comparable implementations. Even if your Comparable / Comparator implementation returned the same non-zero integer value for equal objects, Collections.sort(..) would have worked fine. Not anymore since timsort, a better-performing sorting algorithm written by Tim Peters for Python, was introduced by Oracle with Java 7 as the default sorting algorithm; its implementation requires compareTo() and compare() to fulfil their contract, in particular sgn(compare(x, y)) == -sgn(compare(y, x)), which implementations that never return 0 violate. After not being able to reproduce the exception mentioned above for some time, I finally managed to write a sample project which demonstrates the issue. You need three classes:

File 1: CollectionsAutomatedTest.java

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CollectionsAutomatedTest {

	private static final int CST_MAX_INT = 10;
	private static final int CST_MAX_SIZE = 100000;

	public static void main(String[] args) {
		List<Foo> fooList = new ArrayList<>();

		while (fooList.size() < CST_MAX_SIZE) {
			int i = (int) (Math.random() * CST_MAX_INT);
			fooList.add(new Foo(i));
		}
		System.out.println("now sorting list...");

		// a) first, try sorting using the Comparable implementation of Foo
		try {
			Collections.sort(fooList);
			System.out.println("successfully sorted by Comparable<T>.");
		}
		catch (IllegalArgumentException e) {
			System.out.println("boooom! Sort by Comparable<T> fails if not consistent with equals.");
		}

		// b) second, try sorting using the Comparator implementation FooComparator
		try {
			Collections.sort(fooList, new FooComparator());
			System.out.println("successfully sorted by Comparator<T>.");
		}
		catch (IllegalArgumentException e) {
			System.out.println("boooom! Sort by Comparator<T> fails if not consistent with equals.");
		}
	}
}

File 2: Foo.java

public class Foo implements Comparable<Foo> {

	private int orderId = 0;

	public Foo(int orderId) {
		this.orderId = orderId;
	}

	public int getOrderId() {
		return orderId;
	}

	@Override
	public int compareTo(Foo o) {
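		// deliberately never returns 0, even for equal orderIds: this violates the compareTo contract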
		if (this.orderId < o.getOrderId()) {
			return -1;
		}
		return 1;
	}

}

File 3: FooComparator.java

import java.util.Comparator;

public class FooComparator implements Comparator<Foo> {

	@Override
	public int compare(Foo o1, Foo o2) {
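		// deliberately never returns 0: equal orderIds still yield 1, violating the compare contract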
		if (o1.getOrderId() < o2.getOrderId()) {
			return -1;
		}
		return 1;
	}

}

Findings

When running the main() under a Java 7 based runtime environment (I tested it with JDK 1.7.0_21), you’ll get the following output:

now sorting list...
boooom! Sort by Comparable<T> fails if not consistent with equals.
boooom! Sort by Comparator<T> fails if not consistent with equals.


When running the main() under a Java 1.6 based runtime environment (I tested it with JDK 1.6.0_37), you’ll get the following output:

now sorting list...
successfully sorted by Comparable<T>.
successfully sorted by Comparator<T>.

This only proves empirically that the sort functionality requires Comparator and Comparable implementations to be consistent with equals to work as expected under JDK 1.7.0_21, but not under JDK 1.6.0_37. To demonstrate that this issue is related to timsort, let us run the application under JDK 1.7.0_21 and disable timsort using the useLegacyMergeSort system property. Just add the following line as the first line within the main method:

System.setProperty("java.util.Arrays.useLegacyMergeSort", "true");

and run the application again. This would produce the following output even under JDK 1.7.0_21:

now sorting list...
successfully sorted by Comparable<T>.
successfully sorted by Comparator<T>.

Consequences

In words: The introduction of timsort changed the behaviour of Collections.sort() / Arrays.sort(). Starting from Java 7 you have the following options:

  • either make sure your Comparator / Comparable implementations are consistent with equals (see the example below) or
  • use the ‘-Djava.util.Arrays.useLegacyMergeSort=true’ switch to disable timsort

To make the compareTo() implementation consistent with equals, just make sure that the method not only returns negative / positive integers for smaller / bigger objects but also returns 0 for objects where a.equals(b) or b.equals(a) would return true. To continue with the example from the beginning (where no date may ever be null), this turns out to be:

public class Foo implements Comparable<Bar> {

...

  @Override
  public int compareTo(Bar bar) {
    Date thisDate = this.getCreateDate();
    Date otherDate = bar.getCreateDate();
    if (thisDate.after(otherDate)) {
      return -1;
    } else if (thisDate.before(otherDate)) {
      return 1;
    }
    return 0;
  }
  ...
}
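
The same fix applies to a Comparator. A minimal sketch of an equals-consistent variant of FooComparator from the sample project above, using Integer.compare() (available since Java 7), could look like this:

import java.util.Comparator;

public class FooComparator implements Comparator<Foo> {

	@Override
	public int compare(Foo o1, Foo o2) {
		// returns a negative value, zero or a positive value, so the
		// Comparator contract holds and equal orderIds compare as 0
		return Integer.compare(o1.getOrderId(), o2.getOrderId());
	}

}
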
Posted in Software, Software Architecture, Uncategorized

Backing-up your most valuable Data

In this previous post you found instructions on how to set up a USB-stick to contain an additional hidden encrypted partition to store valuable data on. Here you learn how to use your secured USB-stick to back up your most valuable data, based on a simple rsync script.

Step 1: Protect my local Data: Yes or No?

The first question is whether you consider your local data on your PC / Mac to be secured well enough to prevent unauthorized access. Since we are using laptops, which are known to be portable, we consider the data to be unsafe. If you are positive that no unauthorized user can access your computer, make sure all your valuable data is stored in a specific folder and proceed to step 2 of this tutorial; otherwise read on.

The aim of this step is to create a new local encrypted file container which can later be mounted as a local encrypted drive. This container has to be big enough to contain your valuable data and still fit on your USB-stick: it must be slightly smaller than or equal to the space on the hidden USB partition created before. To do this, follow the steps of our previous post, but instead of creating a volume within a partition/drive, choose the option ‘create encrypted file container’.

Create an encrypted file container

Then choose ‘Standard TrueCrypt volume’ (unless you are really paranoid and want to have a fake drive which would be exposed by mounting the file with another password, not unveiling the secret data but rather showing some uninteresting or fake-secret content you don’t mind presenting to any Alice as well as Bob).

Now select the volume location. Make sure your (local) harddrive provides enough space to fit another big file the size of your hidden USB-stick partition, prepared in the other post. Then repeat the steps as described under ‘Encryption Options’.

After having finished this step, you should have three newly created devices on your desktop:

  • one small unencrypted drive on your USB-stick (visible to anyone plugging in the stick)
  • one larger hidden and encrypted drive on your USB-stick
  • one large encrypted drive in a file on your harddrive

Step 2: Create your rsync-Script

In this step we create a simple rsync script which copies the data from your local location to the stick. Every time you run the script, all your local data found under a given directory will be synchronized to your USB-stick. Any changes applied to your stick will be overwritten; data is only synchronized in one direction, from PC / Mac to USB-stick.

Determine your top source folder
If you decided to store your valuable files locally in an encrypted file container, you will have to mount it before running the script. Within TrueCrypt, select the file container on your local harddrive, select a free slot from the volume list, click the ‘Mount’ button and enter the password / select the keyfile. After having successfully mounted your file container, you’ll see a new drive icon on your desktop and your top source folder would be something like

'/Volumes/SOURCE_SEC_DATA/'

given SOURCE_SEC_DATA was the device name you assigned to your file container. Otherwise, if you decided against encrypting your local files, you’ll simply have a predefined folder whose content you want to have recursively copied to the secured USB-stick. Your top source folder would then be something like

'/Users/UserName/PrivateData/'

given UserName is your name and PrivateData the name of the folder in your user home into which you have moved all your valuable data.

Determine your top destination folder
Our destination folder will be the top level of the mounted hidden partition on our USB-stick, which we already prepared in the previous post. Within TrueCrypt, select the partition on your connected USB-stick, select a free slot from the volume list, click the ‘Mount’ button and enter the password / select the keyfile. After having successfully mounted your hidden partition, you’ll see a new drive icon on your desktop and your top destination folder would be something like

'/Volumes/DEST_SEC_DATA/'

given DEST_SEC_DATA is the device name you assigned to the volume contained in the hidden partition on your USB-stick.

Write your shell script
With the preparation steps from the last section, your shell script should look something like:

#!/bin/sh
rsync -az --delete /Volumes/SOURCE_SEC_DATA/ /Volumes/DEST_SEC_DATA/

The --delete option tells rsync to delete any files in the destination folder (on your USB-stick) which have previously been removed from your source folder. If you do not want files removed from your USB-stick, drop the ‘--delete’ option from the statement. Attention: be careful to provide the rsync parameters in the correct order, and always check your script against a set of dummy files before running it for the first time.
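For such a check, a slightly extended sketch like the following can help: it refuses to run when the hidden partition is not mounted (otherwise rsync might write to a plain local folder instead of the stick), and the -n flag turns the run into a dry run that only lists what would be copied or deleted. The volume names are the examples from above:

#!/bin/sh
SRC='/Volumes/SOURCE_SEC_DATA/'
DST='/Volumes/DEST_SEC_DATA/'

# abort if the hidden partition is not mounted
if [ ! -d "$DST" ]; then
    echo "destination $DST is not mounted, aborting" >&2
    exit 1
fi

# -n: dry run, only list the changes
rsync -azn --delete "$SRC" "$DST"

Once the dry run shows exactly the files you expect, remove the -n and run the script again to perform the actual synchronization.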

Make your script runnable
Store your script on your desktop with a name like ‘update_stick.command’. From the command line run

sudo chmod +x /Users/UserName/Desktop/update_stick.command

Congrats! Now you can see the script on your desktop and can rename it or add a nice icon to it. After clicking the icon, all deltas are synced from your source folder to your destination folder.

Posted in Data Security, Knowledge