Multiple containers on GCP with Nginx, Let’s Encrypt and a web server.

GCP provides Docker-optimised compute nodes where you can spin up very lightweight (minimal-footprint) containers. They run on the so-called Container-Optimised OS, which seems to work great for simple use cases. However, there is a limitation: you can only auto-configure an instance to run a single container at a time. The OS takes care of restarting, port mapping etc., but this limitation sometimes makes it tough to run your hobby projects.
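For context, this is roughly how that single-container auto-configuration is done (a sketch only: the instance name, zone and tags are placeholders, and metabase/metabase is the public Metabase image):

# From Cloud Shell (or anywhere gcloud is set up): create a Container-Optimised
# OS instance that auto-runs exactly one container
gcloud compute instances create-with-container metabase-node \
    --zone=europe-north1-a \
    --container-image=metabase/metabase \
    --tags=http-server,https-server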

Let’s try to go over a use case here. For this example, we take a project – Metabase – which runs a Jetty app server on port 3000.

Use Case

  1. You have your primary Docker service (Metabase) running on port 3000.
  2. You want to run an nginx proxy on 80/443.
  3. You want to set up a certbot + Let’s Encrypt SSL certificate on this server as well, so your service is securely out in the open.

Prerequisites

  1. You have a GCP compute node up with Metabase, and it is running on port 3000. You can follow this tutorial here in case you run into trouble.
  2. You have assigned a public IP to the node, and your setup is now accessible at http://your-public-ip:3000

Setup

At this point, you have a Metabase container running on your GCP machine. You can log in via SSH, or use the in-browser console, and verify this with docker ps inside your compute node. It’s time to set up the nginx reverse proxy.

Starting with the basics, we need some nginx configuration. This config does two things:

  1. Start the nginx server on port 80/443. For this example, we use the host network itself, since GCP uses the host network when running containers.
  2. Serve the certbot ACME challenge correctly, so we can get the certs later.
user_name@gcpnode $ mkdir nginx 
user_name@gcpnode $ nano nginx/nginx.conf

Now, setting up nginx/nginx.conf is going to be easy. Let’s see how it should look:

events {
  worker_connections  4096;  ## Default: 1024
}

http {
    log_format combined_ssl '$remote_addr - $remote_user [$time_local] '
                            '$ssl_protocol/$ssl_cipher '
                            '"$request" $status $body_bytes_sent '
                            '"$http_referer" "$http_user_agent"';
    server {
      listen 80;
      server_name subdomain.domain.com;
    
      location /.well-known/acme-challenge/ {
        root /var/www/certbot;
      }
    
      location / {
          return 301 https://$host$request_uri;
      }
    }

}
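Before running it for real, you can sanity-check the file with nginx’s built-in config test (a quick sketch reusing the same bind mount):

user_name@gcpnode $ docker run --rm \
    -v /home/user_name/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx nginx -t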

That would be enough to serve the certbot challenges. Let’s run the nginx server now. If it complains about missing folders, just create them.

user_name@gcpnode $ docker run --network host -p 80:80 -p 443:443 \
    -v /home/user_name/nginx/nginx.conf:/etc/nginx/nginx.conf \
    -v /home/user_name/certbot/letsencrypt:/etc/letsencrypt \
    -v /home/user_name/certbot/www:/var/www/certbot -d nginx
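If nginx complained about the mounted folders not existing, create them first (using the same paths as the command above) and run the container again:

user_name@gcpnode $ mkdir -p /home/user_name/certbot/letsencrypt /home/user_name/certbot/www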

Now check that nginx is running correctly, using docker ps. If things are good, it’s time to run the certbot challenge.

user_name@gcpnode $ docker run --rm --name temp_certbot \
    -v /home/user_name/certbot/letsencrypt:/etc/letsencrypt \
    -v /home/user_name/certbot/www:/tmp/letsencrypt \
    -v /home/user_name/servers-data/certbot/log:/var/log \
    certbot/certbot:v1.8.0 certonly --webroot --agree-tos --renew-by-default \
    --preferred-challenges http-01 --server https://acme-v02.api.letsencrypt.org/directory \
    --text --email useremail@domain.com -w /tmp/letsencrypt -d subdomain.domain.com

There we go. If things go alright, you should have your certs in /home/user_name/certbot/letsencrypt.
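Let’s Encrypt certificates expire after 90 days, so you will want to rerun certbot periodically. A minimal renewal sketch, assuming the same volume paths as above (schedule it via cron or similar):

user_name@gcpnode $ docker run --rm --name temp_certbot \
    -v /home/user_name/certbot/letsencrypt:/etc/letsencrypt \
    -v /home/user_name/certbot/www:/tmp/letsencrypt \
    certbot/certbot:v1.8.0 renew --webroot -w /tmp/letsencrypt

# Reload nginx afterwards so it picks up the renewed certificates
user_name@gcpnode $ docker exec <nginx_container_id> nginx -s reload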

Now that this is done, it’s time to route some SSL traffic to your web server. Let’s edit the nginx config one more time and add the necessary routes. In this case, it’s a Metabase instance running on the same machine, serving HTTP on port 3000.

Let’s update the nginx config in nginx/nginx.conf again to route the reverse-proxied traffic to your Metabase installation.

events {
  worker_connections  4096;  ## Default: 1024
}

http {
    log_format combined_ssl '$remote_addr - $remote_user [$time_local] '
                            '$ssl_protocol/$ssl_cipher '
                            '"$request" $status $body_bytes_sent '
                            '"$http_referer" "$http_user_agent"';
    server {
        listen 80;
        server_name subdomain.domain.com;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
          return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        server_name subdomain.domain.com;

        access_log /var/log/nginx/access.log combined_ssl;

        ssl_certificate /etc/letsencrypt/live/subdomain.domain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/subdomain.domain.com/privkey.pem;

        location / {
            set $upstream "site_upstream";

            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            proxy_set_header X-Real-Port $server_port;
            proxy_set_header X-Real-Scheme $scheme;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Ssl on;

            expires off;
            proxy_pass http://$upstream;
        }
    }

    upstream site_upstream {
        server your-gcp-private-ip:3000;
    }
}

Make sure to update the server_name, certificate paths, and upstream address to match your install. Once that is done, it’s time to restart the nginx container to enjoy the new HTTPS service. You can do that with something like:

user_name@gcpnode $ docker ps # and, note the nginx container id. 
user_name@gcpnode $ docker stop <container_id> # pass the right id.
user_name@gcpnode $ docker run --network host -p 80:80 -p 443:443 \
    -v /home/user_name/nginx/nginx.conf:/etc/nginx/nginx.conf \
    -v /home/user_name/certbot/letsencrypt:/etc/letsencrypt \
    -v /home/user_name/certbot/www:/var/www/certbot -d nginx

# Hopefully, things start alright here. Check docker logs for clarity. 
user_name@gcpnode $ docker logs -f <container_id> # Use the new id.

Now you should have your Metabase service running under subdomain.domain.com. Things to think about:

  1. The Container-Optimised OS on GCP uses the host network by default, so the port mappings (-p) in this example are effectively ignored since we use --network host. Running the containers on a separate Docker network would probably be the cleaner way to do it.
  2. You should point a DNS A record for your subdomain at the GCP public IP.
  3. You should not have to expose any ports through the GCP firewall other than HTTPS, plus HTTP if you keep renewing certificates over the http-01 challenge (see the sketch after this list).
  4. Since Metabase handles sensitive data, it’s always recommended to host it behind a VPN or Cloudflare for Teams for access control.
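As a rough sketch of locking the GCP firewall down to just these ports (the rule name here is a placeholder, and it assumes your instance carries the https-server network tag):

# Allow only HTTP (for the ACME challenge) and HTTPS through the GCP firewall
gcloud compute firewall-rules create allow-web \
    --direction=INGRESS --allow=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 --target-tags=https-server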

Hope it helps.

[OpenStack] Get IPv4 address of a VM from compute object

Recently came across this scenario:

  1. I create a VM with conn.compute.create_server(*args) and have the server object. The server is allocated an IP over DHCP.
  2. I want the IPv4 address of the machine.

Seems tough? I finally found this one:

from openstack import connection

conn = connection.Connection(
    auth_url=configs['auth']['OS_AUTH_URL'],
    project_name=configs['auth']['OS_PROJECT_NAME'],
    username=configs['auth']['OS_USERNAME'],
    password=configs['auth']['OS_PASSWORD'],
    project_domain_name=configs['auth']['OS_PROJECT_DOMAIN_NAME'],
    user_domain_name=configs['auth']['OS_USER_DOMAIN_NAME']
)
# Define network, security_groups_list, user_data_file_opened
# An example network config is given below
network = {
  "name": "personal_network",
  "security_group":"open",
  "subnet": {
    "name": "personal_network_subnet",
    "ip_version": "4",
    "cidr": "10.10.60.0/24",
    "dns_servers":["8.8.8.8","8.8.8.4"],
    "gateway_ip": "10.10.60.1"
  }
}
# Pick the first network matching the name defined above
network_ = [x for x in conn.network.networks(name=network['name'])][0]
node = conn.compute.create_server(
    name=server_name,
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network_.id}],
    key_name=keypair.name,
    security_groups=security_groups_list,
    user_data=user_data_file_opened
)
# Wait for the server to become ACTIVE, then read its fixed IPv4 address
node_ = conn.compute.wait_for_server(node, wait=360)
node_ip = conn.compute.get_server(node.id).to_dict()['addresses'][network['name']][0]['addr']

print(f'New node ip is {node_ip}')

I have pasted the gist here, https://gist.github.com/tonythomas01/e7cecc6c1aaa4d4ca221487659ef9f40

Tell me how it goes, good luck.

Basic CRUD with Openstack Python V2.x clients

Last week I had this shiny assignment from a company here, as part of a thesis interview, to build some Python scripts using the Openstack Python clients. It made me write some code which eventually got me through. The recent upgrade of the Openstack Python clients from API v1 to v2 has left most parts undocumented or scattered here and there, so here you go.

Keystone: authenticate

In case you are just playing around with the default project domains, this should return an auth session.

from keystoneauth1.identity import v3
from keystoneauth1 import session

def authenticate_and_return_session(auth_url='', username=None, password=None):
    auth = v3.Password(
        auth_url=auth_url, username=username,
        password=password, project_name="demo", user_domain_id="default",
        project_domain_id="default"
    )
    return session.Session(auth=auth)
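For example, it could be called like this (the endpoint and credentials are placeholders):

# Build a session against your Keystone endpoint and reuse it for the other clients
sess = authenticate_and_return_session(
    auth_url='http://controller:5000/v3',
    username='demo',
    password='secret'
)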

Keystone: list projects

from keystoneclient.v3 import client as keystoneclient

def list_projects(session=None):
    keystone = keystoneclient.Client(session=session)
    return keystone.projects.list()

Glance: list images and return the first one

You might now want to list all the images, to select which one to use.

from glanceclient import Client

def list_images(session=None):
    # now check out images from glance

    glance = Client('2', session=session)

    image_ids = []
    for image in glance.images.list():
        image_ids.append(image.id)

    print('{0} Images found'.format(len(image_ids)))
    # Return the first image id
    return image_ids[0]

Nova: list flavors and return the `tiny` one

from novaclient import client as novaclient

def get_your_flavor(session=None, flav_name='m1.tiny'):
    nova = novaclient.Client('2.1', session=session)
    return nova.flavors.find(name=flav_name)

Neutron: Create network

This is one of the important steps; here we are trying to create a custom network and subnet on which our VM should reside.


from neutronclient.v2_0 import client
def create_network(session, network_name='test_net'):
    neutron = client.Client(session=session)

    return neutron.create_network(
        body={"network": {"name": network_name, "admin_state_up": True}}
    )

Neutron: Create your custom subnet

def create_subnet(neutronclient=None, net=None, cidr='192.168.2.0/24'):
    return neutronclient.create_subnet(
        body={
            'subnet': {
                'name': 'test_sub', 'network_id': net['network']['id'],
                'ip_version': 4, 'cidr': cidr, 'enable_dhcp': True
            }
        }
    )

Neutron: Connect new subnet to the default router

This adds an interface on the default router for our new subnet:

sub = create_subnet(neutronclient=neutron, net=net)
neutron.add_interface_router(
    neutron.list_routers()['routers'][0]['id'],
    body={
        'subnet_id': sub['subnet']['id']
    }
)

Nova: Create your instance, connect your NIC to the new network

# `nova`, `net`, `image` and `flav` come from the previous steps
def create_instance(nova, net, image, flav, instance_ip_address='192.168.2.5'):
    nics = [
        {
            'net-id': net['network']['id'],
            'v4-fixed-ip': '{0}'.format(instance_ip_address)
        }
    ]
    instance = nova.servers.create(
        name='api-test', image=image, flavor=flav, nics=nics
    )
    if instance:
        print('Created: {0}'.format(instance))
    return instance
That’s it! You can see a better version here though. Leave a comment if you found this interesting.

WMHACK 17 – what can possibly happen in 3 days

Long story short – I am just back from the 2017 edition of the Wikimedia Hackathon, and this time I have things to say. Keeping it a bit different from previous posts on hackathons, I am going to keep this short and steady so that you do not get bored too much. Cutting drama is something which I generally don’t do, but this one really needs it.

More media of the event at: Wikimedia Commons

Wikimedia Hackathons are always a reminder of how little I know about the Mediawiki software, and working live with these dinosaur-mw-mentors is more than fun. You might wonder why the Foundation should fly us to a place when you can get the same done over IRC or any other medium. I can tell you why as I complete this post.

Background: We had a Google Summer of Code 2015 project to build and deploy a newsletter extension for Mediawiki – which has still not hit production. We tried our luck at Wikimania hackathons, other summits etc., but we were never close enough. I do not blame this on anybody; for example, we were almost in beta (one step before prod) after WMHACK16, but the Wikimania hackathon 2016 saw a huge change of codebase (the shift to ContentHandler), and we were back at level 0. It had always been a dream of a few community members like Quim Gil, me and a couple of others to get this thing to production – and we were trying hard with little luck.

My aim for the hackathon: Simple, but to get all security review blockers of Newsletter extension merged and get it deployable in production.

Day 0: Arrival, and other fun

Flying from Berlin on an early-hour flight – this went pretty smoothly, though I was a bit sleep deprived as well. Sadly I couldn’t get into my room until 13:00, so I took to working a bit on my Igalia project and killed the time. Later that night I remember even having a short dinner with Srishti Seth and spending the rest of the night mingling with various other mentors. We had some preparatory meetings as well – how good.

Day 1: Mentor-newcomer program

This year the Hackathon organizers tried out a new mentoring program, with a pitching session and breakout sessions to induct newcomers into projects. I had a whole list of newcomer tasks which were pitched to the newcomers as well. We created a couple of Telegram groups to coordinate, and I would say that the whole thing went smoothly.

Day 2-3: What was that ?

I clubbed days 2 and 3 together as they went super fast and super hacky. I found Brian Wolff, who was lurking around fixing things here and there, and made him sit down with the newsletter extension hacks. This turned out pretty well, as he was happy to help, and that went all the way from Day 2, 10:00, to Day 2 night, 21:00. WOW. Some funny things which happened (mostly exaggerated, technically it should sound fun):

  1. At one point I literally had a new maintenance script dictated to me by Brian (no laptop) and Florian (no laptop) together. This was equally embarrassing and informative.
  2. Srishti was there as well, and she pushed one of her first patch sets to Gerrit, which was, to her embarrassment, adding an @author tag to one of the newsletter extension classes (I’m kidding here, as my first patch was a spelling-mistake fix).
  3. Around 20:00 Nemo Bis tells me I should contribute to Malayalam translations via the Unicode web app and I get to vote on translations.
  4. MtDu (a past GCI student) was there at the table with us as well – and he seemed to enjoy me being dictated to. He fixed a couple of other bugs for us – kudos to him.
  5. At around 22:00, Brian went up to take his laptop (that would mean that sleep was cancelled for the night for the table), but then at around 22:15, Florian made a statement that made a huge impact. It was something like “The party happens only tonight, and you can fix the code tomorrow too”. I remember packing back the machines, and even pulling Srishti (who apparently pulled in Andre) and what not – we had a huge team checking out the Hackathon party. I blame Florian, and almost felt like adding “Hackathon party” as a blocker for “Newsletter deployment to production”. Kidding again, but that was fun.
  6. On the way to the hackathon party we rate ourselves out of 100 (work:show, instead of the Howard Stern 10). I am rated 60:40 for work:show, while Florian gets 90:100 and Brian 110:-10 (more kidding, Florian had the worst ratings).

Day 3: We still have things left

We were closing down on things, and apparently Brian and I were stuck on a huge performance issue with the Newsletter tablepager listing. To make it less technical, the current query, as he put it, “should be killed with fire as it can fry the database”, since it uses filesort, and we wanted something that uses indexes or something cheaper. Technically we started on this huge thing on Day 2 around 22:10, and it stayed with us all the way till EOD. Main improvements:

  1. 13:00 – we have most of the cases working, but we still haven’t gotten anywhere with our blocker patch.
  2. <Missed the group photo somewhere in the middle> as we thought we were almost there.
  3. 15:00 – the showcase has started, and we were trying our best (mostly Brian), but we couldn’t showcase the whole thing as done.
  4. 17:00 – showcase finished; we found a better room with cool wind coming in to work on it, and by around 18:45 we had it fixed. WOW. It was such a moment to see the thing working, and Brian expressed his amusement with a weary ‘yaay’ (it was literally just that long).

and yay – we have all blockers resolved for the Newsletter extension security review! and a handful finished from our huge list of tiny tasks at T159081 as well.

Stats for the nerds:

  1. Gerrit changesets submitted during the hackathon for the newsletter extension.

Key Learnings

  • Hackathons work, as you get to make the mentors sit down with you patiently to get the community things done.
  • Telegram chat groups work, but they mostly need to be communicated a bit earlier. This happened during wmhack17, and I am happy for that.
  • Party on the second night can delay deployments – but luckily I pulled out before 1:00 am on the third morning.

Some funny chats during the hackathon:

Alright. So that’s it, Vienna was hot, clean and great! Thanks to the amazing organizers again.

Note: Even though I was a signed-up mentor for the mentor-newcomer task, I mostly became a newcomer from day 2 with these people. I tried to recruit people to work on the rest of the tasks via Telegram, which brought in Srishti and MtDu to fix things.

A few updates after the blog was published:

“MtDu was the one who convinced me to go to the party with his SQL argument (If you LEFT JOIN the hackathon to the party, today the party has value; tomorrow it IS NULL). Although upon actually arriving I found my favourite anti-social computer nerd in a corner with her laptop complaining about lack of WiFi, and preceeded to continue to stare at a laptop screen during the party”  – bawolff