A real quick quick start with Google Cloud Platform command line tools

Getting started with Google Cloud Platform (GCP) is actually very easy, but as with getting started with anything, sometimes you just want a quick 101 of the essential steps to set you on your merry way.

Note: this post is to help you get up and running as quickly as possible, but it does assume that you have read up on best practices for configuring permissions on GCP.

Developing and deploying applications on GCP is arranged around projects and thus understanding how you intend to set up development and admin access for projects is an important initial step.

The next thing to do is to sign up for a Google Cloud account via the Google Compute Engine sign-up page.

Now that you are ready to get started, start a terminal session.

Now you’ve signed up, download the Google Cloud SDK, following the installation instructions for your OS.

Then go through the authorisation process by typing:

gcloud auth login

The Cloud SDK uses Google’s OAuth 2.0 service to authorize users and programs to access Google APIs (see the docs on managing authentication and credentials).

The Cloud SDK actually bundles the individual command line tools for each service as well as the appropriate SDKs for each supported language. The modularity is great but can be a bit confusing at first.

So what exactly do you get when you download the Google Cloud Platform SDK?
If you’re following along type the following:

gcloud components list

This gives you a list of the modules for each service.

For those components you do not have installed, or that need an update, use the command:

gcloud components update component-name


So, for example, to update the App Engine SDK for Go I would type:

gcloud components update gae-go


Next, make sure you are working in the correct project. Type:


gcloud config list

to check what the default project is currently set to.

You can set a different default project by typing

gcloud config set project YourProjectID

You can run this at any time to reset the default project. If you are working on more than one project you will need to specify the non-default project appropriately; where and when you do this depends on the command.

Note: you will probably have to activate any services you need for a particular project by using the console. Make sure you are in the project you wish to activate the service for, then select APIs under APIs & auth and set the status to On for the services you want activated for that project.

Each GCP product has its own set of commands, and you need to use the appropriate set of commands to interact with the service.
See the list produced by

gcloud components list

For example to interact with BigQuery you use bq and for Cloud Storage you use gsutil.

Below is an example showing how easy it is to get started, listing the set of commands that I used to upload a CSV file to Cloud Storage using gsutil, load the data into BigQuery using bq, and start querying that data.

First I created a schema, as this is needed to pass to bq when creating the table.
Then I uploaded my data set to Cloud Storage:

gsutil cp PeriodicTableDataSet.csv gs://my-periodictable-bucket
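It’s worth sanity-checking the file layout before uploading it. I don’t show the dataset in this post, so the rows below are a hypothetical sketch of what it might look like; the column names are my assumption, based on the fields queried later:

```shell
# Hypothetical first rows of PeriodicTableDataSet.csv -- the exact columns
# are an assumption; yours may differ.
cat > PeriodicTableDataSet.csv <<'EOF'
Z,name,symbol,mass
1,Hydrogen,H,1.008
2,Helium,He,4.0026
101,Mendelevium,Md,258
EOF

# Eyeball the header and first data row before uploading:
head -2 PeriodicTableDataSet.csv
```

If your file has a header row like this one, remember that bq load has a --skip_leading_rows flag to skip it.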

Next I created a BigQuery dataset called elements:

bq mk elements

Then in a single command I created a table and loaded it with my dataset, passing the schema file as the final argument:

bq load elements.ptable gs://my-periodictable-bucket/PeriodicTableDataSet.csv ptable_schema.json
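The schema I mentioned creating is just a JSON array of field definitions. A sketch of what it might look like for this dataset (the file name ptable_schema.json and the field names are my assumptions):

```shell
# Hypothetical schema for the periodic table data -- field names and types
# are assumptions matching the columns queried below.
cat > ptable_schema.json <<'EOF'
[
  {"name": "Z",      "type": "INTEGER"},
  {"name": "name",   "type": "STRING"},
  {"name": "symbol", "type": "STRING"},
  {"name": "mass",   "type": "FLOAT"}
]
EOF
```

bq load accepts a schema file like this as its final argument, or an inline schema of the form Z:integer,name:string,symbol:string,mass:float.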

This returns a success status like the one below if everything is okay:
Waiting on bqjob_r1c83caf93cc4a0db_0000014a0057532d_1 … (27s) Current status: DONE

So now I could start querying my data after just 3 steps (4 if you include creating the schema) once I had signed up:

bq query "SELECT name, symbol from elements.ptable where Z >100"

Waiting on bqjob_r64487038b0ec017d_0000014a005b650e_1 ... (0s) Current status: DONE
+----------------+--------+
|      name      | symbol |
+----------------+--------+
| Mendelevium    | Md     |
| Nobelium       | No     |
| Lawrencium     | Lr     |
| Rutherfordium  | Rf     |
| Dubnium        | Db     |
| Seaborgium     | Sg     |
| Bohrium        | Bh     |
| Hassium        | Hs     |
| Meitnerium     | Mt     |
| Darmstadtium   | Ds     |
| Roentgenium    | Rg     |
| Ununbium       | Uub    |
| Ununtrium      | Uut    |
| Ununquadium    | Uuq    |
| Ununpentium    | Uup    |
| Ununhexium     | Uuh    |
| Ununseptium    | Uus    |
| Ununoctium     | Uuo    |
+----------------+--------+

Help is available by typing command --help or command help.
Thus for Cloud Storage you would type gsutil --help, and for BigQuery bq --help.

So, as you can see, within a few minutes of setting up your account you are ready to start using the command line tools for GCP and getting immediate payback.


Google Cloud Platform and the choices to be made on how to deploy an application

The Cloud gives you plenty of choices, but this is a double-edged sword, as deciding how to architect your solution and what the best way to deploy it is can lead to some hair-tearing times. I keep my hair short for a reason! 😃

This post will not help with any of those decisions though; all it will do is walk you through deploying the same application (Jenkins) on a single cloud platform, Google Cloud Platform (GCP), in different ways using the gcloud command line tools.

The cool thing is that each method literally takes minutes! Personally I’m a big fan of immutable infrastructure and of minimising errors by using scripts, so it won’t be a surprise that I like the most hands-off Docker approach (even if I detest YAML), but I’ll leave it to you to decide which method best suits you.

Note: this assumes you have some familiarity with the Google Cloud SDK (if not, look out for my 101 post). It also assumes some familiarity with basic Docker commands.

Method 1 : Installing direct to a GCP instance

First deploy an instance

gcloud compute instances create jenkins-instance --image debian-7 --zone us-central1-a

Grab the external IP

gcloud compute instances list
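If you want the external IP in a script rather than by eye, you can cut it out of the list output with awk. Here it is run against a sample line in the 2014-era tabular output format; the sample itself (machine type, addresses) is my assumption, so check your SDK version’s actual column headers:

```shell
# Sample output of `gcloud compute instances list` -- the column layout is
# an assumption; verify against your SDK's real output.
cat > instances.txt <<'EOF'
NAME             ZONE          MACHINE_TYPE  INTERNAL_IP EXTERNAL_IP  STATUS
jenkins-instance us-central1-a n1-standard-1 10.240.0.2  203.0.113.10 RUNNING
EOF

# Pull the EXTERNAL_IP column (field 5) for our instance:
awk '$1 == "jenkins-instance" {print $5}' instances.txt
```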

Connect to the instance

gcloud compute ssh jenkins-instance

Install Jenkins:

wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -

sudo bash -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ >> /etc/apt/sources.list'

sudo apt-get update
sudo apt-get install jenkins

Next set up firewall rules to expose port 8080 (note: you should make this very restrictive initially, until you have had a chance to set up security):

gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:8080

Check the firewall rules have been set up okay

gcloud compute firewall-rules list

You can now access the Jenkins admin web interface via the external IP address on port 8080.

Method 2 : Using a Container-optimized Google Compute Engine image interactively

You need to select a Container-optimized Google Compute Engine image.

List the available versions

gcloud compute images list --project google-containers

This will list the available container-optimised images. Select an appropriate image name (in this walkthrough I select the default Google container-optimised image):

NAME                                PROJECT           ALIAS              DEPRECATED STATUS
centos-6-v20141108                  centos-cloud      centos-6                      READY
centos-7-v20141108                  centos-cloud      centos-7                      READY
coreos-alpha-509-1-0-v20141124      coreos-cloud                                    READY
coreos-beta-494-1-0-v20141124       coreos-cloud                                    READY
coreos-stable-444-5-0-v20141016     coreos-cloud      coreos                        READY
backports-debian-7-wheezy-v20141108 debian-cloud      debian-7-backports            READY
debian-7-wheezy-v20141108           debian-cloud      debian-7                      READY
container-vm-v20141016              google-containers container-vm                  READY
opensuse-13-1-v20141102             opensuse-cloud    opensuse-13                   READY
rhel-6-v20141108                    rhel-cloud        rhel-6                        READY
rhel-7-v20141108                    rhel-cloud        rhel-7                        READY
sles-11-sp3-v20140930               suse-cloud        sles-11                       READY
sles-11-sp3-v20141105               suse-cloud        sles-11                       READY
sles-12-v20141023                   suse-cloud                                      READY
ubuntu-1204-precise-v20141031       ubuntu-os-cloud   ubuntu-12-04                  READY
ubuntu-1404-trusty-v20141031a       ubuntu-os-cloud   ubuntu-14-04                  READY
ubuntu-1410-utopic-v20141030a       ubuntu-os-cloud   ubuntu-14-10                  READY

Start a container optimised instance

gcloud compute instances create jenkins-cnt-vm  --image container-vm-v20141016  --image-project google-containers  --zone us-central1-a  --machine-type f1-micro

Note that you need to declare the project that the image you select is associated with (via --image-project).

Once the instance is up and running, ssh into it and install Jenkins by pulling the official Jenkins image down and exposing port 8080:

gcloud compute ssh jenkins-cnt-vm
sudo docker pull jenkins:latest
sudo docker run -p 8080:8080 -d -t jenkins

Listing running Docker instances

sudo docker ps
CONTAINER ID        IMAGE                     COMMAND                CREATED             STATUS              PORTS                                 NAMES
2c8dfb26da3a        jenkins:latest            "/usr/local/bin/jenk   10 seconds ago      Up 9 seconds        50000/tcp,>8080/tcp    jovial_thompson
d7d799d93d55        google/cadvisor:latest    "/usr/bin/cadvisor"    32 minutes ago      Up 32 minutes                                             k8s_cadvisor.417cd83c_cadvisor-agent.file_4da26b48
3d719fdc322e        kubernetes/pause:latest   "/pause"               33 minutes ago      Up 33 minutes>8080/tcp                k8s_net.f72d85c8_cadvisor-agent.file_19d8274a

If firewall rules have not been set up for the project, do that now so you can access the Jenkins admin web interface via the external IP address on port 8080 (see above).

Method 3 : Using a Container-optimized Google Compute Engine image without logging onto instance

Create a yaml manifest file. This tells the container-optimised instance which images to pull, which containers to run and which ports to expose. In my example the containers.yaml file contains:

   version: v1beta2
   containers:
     - name: jenkins-demo
       image: jenkins:latest
       ports:
         - name: allow-http-8080
           hostPort: 8080
           containerPort: 8080
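YAML is indentation-sensitive, so if you generate the manifest from a script, a quoted heredoc keeps the indentation intact. A sketch of writing the manifest this way:

```shell
# Write the container manifest; the quoted 'EOF' delimiter prevents any
# shell expansion inside the YAML.
cat > containers.yaml <<'EOF'
version: v1beta2
containers:
  - name: jenkins-demo
    image: jenkins:latest
    ports:
      - name: allow-http-8080
        hostPort: 8080
        containerPort: 8080
EOF

# Cheap sanity check that the keys made it in:
grep -q 'hostPort: 8080' containers.yaml && echo "manifest written"
```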

Then deploy a container optimised image passing the manifest

gcloud compute instances create jenkins-instance  --image container-vm-v20141016  --image-project google-containers  --metadata-from-file google-container-manifest=containers.yaml  --zone us-central1-a  --machine-type f1-micro

If firewall rules have not been set up for the project, do that now so you can access the Jenkins admin web interface via the external IP address on port 8080 (see above).