Categories
ITOps

Transitioning from a standard CA to Let's Encrypt!

With the go-live of https://letsencrypt.org/ it's time to transition from the pricey, manual standard SSL cert issuing model to a fully automated process using the ACME protocol. Most orgs have numerous uses for CA-purchased certs; this post covers hosts running Apache/nginx and AWS ELBs, all of which are to be replaced with automated provisioning and renewal of Let's Encrypt signed certs.

Provisioning and auto-renewing Apache and nginx TLS/SSL certs

For externally accessible sites where Apache/nginx handles TLS/SSL termination, moving to Let's Encrypt is quick and simple:

1 – Install the letsencrypt client software (there are RHEL and CentOS RPMs, so that's as simple as adding the package to Puppet policies or running):

yum install letsencrypt

2 – Provision the keys and certificates for each of the required virtual hosts. If a virtual host has aliases, specify multiple names with the -d arg.

letsencrypt certonly --webroot -w /var/www/sites/static -d static.mwclearning.com -d img.mwclearning.com

This will provision a key, certificate and chain to the letsencrypt home directory (default /etc/letsencrypt). The /etc/letsencrypt/live directory contains symlinks to the current keys and certs.

3 – Update the Apache/nginx virtual host configs to use the symlinks maintained by the letsencrypt client, e.g.:

# static Web Site
<VirtualHost *:80>
	ServerName static.mwclearning.com
	ServerAlias img.mwclearning.com
	ServerAlias registry.mwclearning.ninja # <<-- dummy alias for internal site
	ServerAdmin webmaster@mwclearning.ninja

	DocumentRoot /var/www/sites/static
	DirectoryIndex index.php index.html
	<Directory /var/www/sites/static>
		AllowOverride all
		Options +Indexes
	</Directory>
	ErrorLog /var/log/httpd/static_error.log
	LogLevel warn
	CustomLog /var/log/httpd/static_access.log combined
</VirtualHost>

# static Web Site - SSL
<VirtualHost *:443>
	ServerName static.mwclearning.com
	ServerAlias img.mwclearning.com
	ServerAlias img.mwclearning.ninja
	ServerAdmin webmaster@mchost

	DocumentRoot /var/www/sites/static
	DirectoryIndex index.php index.html
	<Directory /var/www/sites/static>
		AllowOverride all
		Options +Indexes
	</Directory>
	ErrorLog /var/log/httpd/static_ssl_error.log
	LogLevel warn
	CustomLog /var/log/httpd/static_ssl_access.log combined

	SSLEngine on
	SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!RC4
	SSLHonorCipherOrder on
	SSLInsecureRenegotiation off
	SSLCertificateKeyFile /etc/letsencrypt/live/static.mwclearning.com/privkey.pem
	SSLCertificateFile /etc/letsencrypt/live/static.mwclearning.com/cert.pem
	SSLCertificateChainFile /etc/letsencrypt/live/static.mwclearning.com/chain.pem
</VirtualHost>

4 – Create a script for renewing these certs, something like:

#!/bin/bash
# Vars
PROG_ECHO=$(which echo)
PROG_LETSENCRYPT=$(which letsencrypt)
PROG_FIND=$(which find)
PROG_OPENSSL=$(which openssl)

#
# Main
#
${PROG_ECHO} "Current expiries: "
for x in $(${PROG_FIND} /etc/letsencrypt/live/ -name cert.pem); do ${PROG_ECHO} "$x: $(${PROG_OPENSSL} x509 -noout -enddate -in $x)";done
${PROG_ECHO} "running letsencrypt certonly --webroot .. on $(hostname)"
${PROG_LETSENCRYPT} renew --agree-tos
LE_STATUS=$?
systemctl restart httpd
if [ "$LE_STATUS" != 0 ]; then
    ${PROG_ECHO} "Automated renewal failed:"
    cat /var/log/letsencrypt/renew.log
    exit 1
else
    ${PROG_ECHO} "New expiries: "
    for x in $(${PROG_FIND} /etc/letsencrypt/live/ -name cert.pem); do ${PROG_ECHO} "$x: $(${PROG_OPENSSL} x509 -noout -enddate -in $x)"; done
fi
# EOF

5 – Run this script automatically every day with cron or Jenkins.
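
For example, a daily cron entry along these lines (the script path and log file here are assumptions, adjust to wherever you install the renewal script):

# /etc/cron.d/letsencrypt-renew (hypothetical path)
30 2 * * * root /usr/local/sbin/renew_letsencrypt_certs.sh >> /var/log/letsencrypt/cron_renew.log 2>&1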

6 – Monitor the results of the script, and externally monitor the expiry dates of your certificates (something will go wrong one day).
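
A minimal external check might look something like the sketch below (the host list and 14-day threshold are assumptions; openssl's -checkend flag returns non-zero when the cert expires within the given number of seconds):

#!/bin/bash
# sketch: warn if a public cert expires within 14 days
for h in static.mwclearning.com img.mwclearning.com; do
    echo | openssl s_client -servername $h -connect $h:443 2>/dev/null \
        | openssl x509 -noout -checkend $((14*24*3600)) \
        || echo "WARNING: certificate for $h expires within 14 days"
done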

Provisioning and auto-renewing AWS Elastic Load Balancer TLS/SSL certs

This has been made very easy by Alex Gaynor with a handy Python script: https://github.com/alex/letsencrypt-aws. This is a great use case for Docker, and Alex has created a Docker image for the script: https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. To make it easy to use I created a layer on top with a new Dockerfile:

#
# mwc letsencrypt-aws image
#
FROM alexgaynor/letsencrypt-aws:latest

MAINTAINER Mark

ENV LETSENCRYPT_AWS_CONFIG="{\"domains\": \
[{\"elb\":{\"name\":\"TestExtLB\",\"port\":\"443\"}, \
\"hosts\":[\"test.mwc.com\",\"test-app.mwc.com\",\"test-api.mwc.com\"], \
\"key_type\":\"rsa\"}, \
{\"elb\":{\"name\":\"ProdExtLb\",\"port\":\"443\"}, \
\"hosts\":[\"app.mwc.com\",\"show.mwc.com\",\"show-app.mwc.com\", \
\"app-api.mwc.com\",\"show-api.mwc.com\"], \
\"key_type\":\"rsa\"}], \
\"acme_account_key\":\"s3://config-bucket-abc123/config_items/private_key.pem\"}"

ENV AWS_ACCESS_KEY_ID="<AWS_ACCESS_KEY_ID>"
ENV AWS_SECRET_ACCESS_KEY="<AWS_SECRET_ACCESS_KEY>"
ENV AWS_DEFAULT_REGION="ap-southeast-2"
# EOF

The explanation of these values can be found at https://hub.docker.com/r/alexgaynor/letsencrypt-aws/. It's quite important to create a specific IAM user to conduct the required Route53, S3 and ELB actions. The image needs to be rebuilt whenever the config changes:

sudo docker build -t registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws .
sudo docker push registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws

With this image built, another cron or Jenkins job can be run daily, executing something like:

sudo docker pull registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws
sudo docker run registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws
sleep 10
sudo docker rm $(sudo docker ps -a | grep registry.mwc.ninja:5000/syseng/ao-letsencrypt-aws | awk '{print $1}')

Again, the job must be monitored, along with external monitoring of the certificates. See a complete SSL checker at https://github.com/markz0r/tools/tree/master/ssl_check_complete.

Categories
Random

Download all Evernote attachments via Evernote API with Python

Python script for downloading snapshots of all attachments in all of your Evernote notebooks.

#!/usr/bin/python
import json, os, pickle, httplib2, io
import evernote.edam.userstore.constants as UserStoreConstants
import evernote.edam.type.ttypes as Types
from evernote.api.client import EvernoteClient
from evernote.edam.notestore.ttypes import NoteFilter, NotesMetadataResultSpec
from datetime import date

# Pre-reqs: pip install evernote 
# API key from https://dev.evernote.com/#apikey

os.environ["PYTHONPATH"] = "/Library/Python/2.7/site-packages"

CREDENTIALS_FILE=".evernote_creds.json"
LOCAL_TOKEN=".evernote_token.pkl"
OUTPUT_DIR=str(date.today())+"_evernote_backup"

def prepDest():
    if not os.path.exists(OUTPUT_DIR):
        os.makedirs(OUTPUT_DIR)
        return True
    return True

# Helper function to turn query string parameters into a dict
# source: https://gist.github.com/inkedmn
def parse_query_string(authorize_url):
    uargs = authorize_url.split('?')
    vals = {}
    if len(uargs) == 1:
        raise Exception('Invalid Authorization URL')
    for pair in uargs[1].split('&'):
        key, value = pair.split('=', 1)
        vals[key] = value
    return vals

class AuthToken(object):
    def __init__(self, token_list):
        self.oauth_token_list = token_list

def authenticate():
    def storeToken(auth_token):
        with open(LOCAL_TOKEN, 'wb') as output:
            pickle.dump(auth_token, output, pickle.HIGHEST_PROTOCOL)    

    def oauthFlow():
        with open(CREDENTIALS_FILE) as data_file:    
            data = json.load(data_file)
            client = EvernoteClient(
                consumer_key = data.get('consumer_key'),
                consumer_secret = data.get('consumer_secret'),
                sandbox=False
            )
        request_token = client.get_request_token('https://assetowl.com')
        print(request_token)
        print("Token expired, load in browser: " + client.get_authorize_url(request_token))
        print "Paste the URL after login here:"
        authurl = raw_input()
        vals = parse_query_string(authurl)
        auth_token=client.get_access_token(request_token['oauth_token'],request_token['oauth_token_secret'],vals['oauth_verifier'])
        storeToken(AuthToken(auth_token))
        return auth_token

    def getToken():
        store_token=""
        if os.path.isfile(LOCAL_TOKEN):
            with open(LOCAL_TOKEN, 'rb') as input:
              clientt = pickle.load(input)
            store_token=clientt.oauth_token_list
        return store_token

    try:
        client = EvernoteClient(token=getToken(),sandbox=False)
        userStore = client.get_user_store()
        user = userStore.getUser()
    except Exception as e:
        print(e)
        client = EvernoteClient(token=oauthFlow(),sandbox=False)
    return client

def listNotes(client):
    note_list=[]
    note_store = client.get_note_store()
    filter = NoteFilter()    
    filter.ascending = False
    spec = NotesMetadataResultSpec(includeTitle=True)
    notes = note_store.findNotesMetadata(client.token, filter, 0, 100, spec)
    for note in notes.notes:
        resources = note_store.getNote(client.token, note.guid, False, False, True, False).resources
        if not resources:
            # skip notes that have no attachments
            continue
        for resource in resources:
            note_list.append([resource.attributes.fileName, resource.guid])
    return note_list


def downloadResources(web_prefix, auth_token, res_array):
    for res in res_array:
        res_url = "%sres/%s" % (web_prefix, res[1])
        print("Downloading: " + res_url + " to " + os.path.join(OUTPUT_DIR, res[0]))
        h = httplib2.Http(".cache")
        # pass the OAuth token so the web API will serve the resource
        (resp_headers, content) = h.request(res_url, "POST",
                                        headers={'auth': auth_token})
        with open(os.path.join(OUTPUT_DIR, res[0]), "wb") as wer:
            wer.write(content)

def main():
    if prepDest():
        client = authenticate()
        user_store=client.get_user_store()
        web_prefix = user_store.getPublicUserInfo(user_store.getUser().username).webApiUrlPrefix
        downloadResources(web_prefix, client.token, listNotes(client))

if __name__ == '__main__':
    main()

Categories
Random

Downloading Google Drive with Python via Drive API

Python script for downloading snapshots of all files in your Google Drive, including those shared with you.

source: https://github.com/SecurityShift/tools/blob/master/backup_scripts/google_drive_backup.py



		
Categories
Intro to DevOps Online Courses

Intro to DevOps – Lesson 3

Topics:

  • Continuous integration/Delivery (Jenkins)
    • Automate from commits to repo to build to test to deploy
  • Monitoring (Graphite)

At a minimum there will be six environments:

  1. local (dev workstations)
  2. dev (sandbox)
  3. integration (test build and side effects)
  4. test (UAT, Performance, QA may be many environments)
  5. staging (live data? – replication of production)
  6. production

From coding to prod

If the handover at each of these steps were manual there would be too many opportunities for delays and errors. So:

  • Continuous integration
    • Maintain a code repository (git)
    • Automate the build (Jenkins, TravisCI, CircleCI)
    • Test the build (Jenkins, TravisCI, CircleCI)
    • Commit changes often (manual)
    • Build each commit (Jenkins)
    • Fix bugs immediately (manual)
    • Test in a clone environment (test suites)

Some practical work setting up Jenkins…
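
For a quick local sandbox, one option (a sketch, assuming Docker is installed and using the official Jenkins image name of the time) is:

sudo docker run -d --name jenkins -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home jenkins
# then browse to http://localhost:8080 to create the first job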

 

Categories
Intro to DevOps Online Courses

Intro to DevOps – Lesson 2

Topics:

  • How to get Dev and ITOps working together
  • Looking at some tools to enable that integration

Started off looking at the basic conflict between ITOps and Dev; in essence the risk aversion of ITOps, why it's there and why it's both good and bad. First-hand experience of being woken up many nights after a ‘great’ new release makes this quite pertinent.

  • ITOps needs to run systems that are tested and tightly controlled – this means that when a release is coming that requires new or significantly changed components, ITOps needs to be included in discussions and made aware so they can ensure stability in production
  • Dev needs to adopt, trial and use new technologies to create software solutions that meet user and business requirements in an effective manner
  • Performance testing needs to be conducted throughout the development iterations, and this is impossible if the development environments are significantly different from production

[Image: DevOps-ReleaseFixes]
These improvements would remove most of the real-world issues we experience when conducting release deployments.

Performance tests:

How can performance tests be conducted throughout the development of new releases, particularly if these releases become more frequent?

Proposed answer 1 – a ‘golden image’: a set image that is used for developing, testing and operating services. This includes apps, libs and OS. Docker makes this more practical with containers.

Proposed answer 2 – apply configuration management to all machines (not sure how practical this could be).

Practical lab:

Installed VirtualBox, Vagrant, Git, SSH and Packer.

Vagrant configures VMs; Packer enables building of ‘golden images’.
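
A minimal Vagrant workflow sketch (the box name is an example):

vagrant init ubuntu/trusty64   # writes a Vagrantfile
vagrant up                     # boots the VM with VirtualBox
vagrant ssh                    # shell into it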

Packer template sections:

  • Builders take a source image.
  • Provisioners install and configure software within the running machine (shell, Chef and Puppet scripts).
  • Post-processors conduct tasks on the images output by builders, e.g. compress (https://www.packer.io/docs/templates/post-processors.html). These post-processors can create images for AWS, DigitalOcean, Hyper-V, Parallels, QEMU, VirtualBox and VMware – a minimal template sketch follows below.
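
A minimal template sketch showing the three sections above (the builder source path, SSH user and provisioning commands are all assumptions):

cat > golden-image.json <<'EOF'
{
  "builders": [{
    "type": "virtualbox-ovf",
    "source_path": "base.ova",
    "ssh_username": "vagrant",
    "shutdown_command": "sudo shutdown -h now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo yum -y update"]
  }],
  "post-processors": [{
    "type": "compress"
  }]
}
EOF
packer build golden-image.json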

Lab instructions were pretty straightforward. On to lesson 3.

Categories
ITOps

Monolithic to Microservices

Review of ‘AWS re:Invent 2015 | (ARC309) Microservices: Evolving Architecture Patterns in the Cloud’ presentation.

From monolithic to microservices, on AWS.


Ruby on Rails -> Java-based functional services.


Java-based functional services resulted in a requirement for everything to be loaded into memory – 20 minutes to start services. They were still very large services, with many engineers working on each one. This means that commits to those repos take a long time to QA, so a commit is made, you start working on something else, then a week or two later you have to fix what you hardly remember. And to get specific information out of those big Java apps it was necessary to parse the entire homepage. So…


The big SQL database is still there – this means schema changes are difficult to do without outages. Included in this stage of microservices was:

  • Team autonomy – give teams a problem and they  can build whatever services they need
  • Voluntary adoption – tools/techniques/processes
  • Goal driven initiatives
  • Failing fast and openly

What are the services? – Email, shipping cost, recommendations, admin, search

Anatomy of a service:


A service has its own datastore and owns it completely. This is how dependency on one big schema is avoided. Services at gilt.com average 2,000 lines of code and 32 source files.

Service discovery – enormously simple? Discovery means a client needs to get to a service: how is it going to get there? ‘It has the name of the service – look up that URL’.


Use ZooKeeper as a highly available store.

Moving all this to the cloud via ‘lift and shift’ with AWS Direct Connect. In AWS all services were their own EC2 instance and Dockerized.

Being a good citizen in a microservices organisation:

  • Service Consumer
    • Design for failure
    • Expect to be throttled
    • Retry with exponential backoff (see the sketch after this list)
    • Degrade gracefully
    • Cache when appropriate
  • Service Provider
    • Publish your metrics
      • Throughput, error rate, latency
    • Protect yourself (throttling)
    • Implementation details private
    • Maintain backwards compatibility
    • See Amazon API gateway
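
A sketch of exponential backoff from the consumer side (the endpoint is an example; real clients would usually add jitter):

max_attempts=5
delay=1
for attempt in $(seq 1 $max_attempts); do
    curl -sf https://app-api.mwc.com/health && break
    echo "attempt $attempt failed, retrying in ${delay}s"
    sleep $delay
    delay=$((delay * 2))
done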

Again service discovery – simple with naming conventions, DNS and load balancers. Avoid DNS issues with dynamic service registry (ZooKeeper, Eureka, Consul, SmartStack).

Data management – moving away from the schema change problems, and the other problems (custom stored procedures, being stuck with one supplier, single point of failure). Microservices must include decentralisation of datastores; services own their datastores and do not share them. This has a number of benefits: being able to choose whatever datastore technology best meets the service's needs, making changes without affecting other services, and scaling the datastores independently. So how do we ensure transactional integrity? Distributed locking sounds horrible. Do all services really need strict transactional integrity? Use queues to retry later.

Aggregation of data – I need to do analytics on my data? Amazon Kinesis Firehose, Amazon SQS, or a custom feed.

Continuous Delivery/Deployment – introduced some AWS services (CodeDeploy), or just use Jenkins.

How many services per container/instance? – Problems with monitoring granularity, scaling is less granular, ownership is less atomic, and continuous deployment is tougher with immutable containers.

I/O explosion – mapping dependencies between services is tough. Some will be popular/hotspots. Service consumers need to cache where they can. Dependency injection is also an option – you can only make a request to service A if you have the required data from services B and C in your request.

Monitoring – logs are key; also, tracing requests through fanned-out dependencies can be much easier if a correlation header is required to be passed on.

Unique failures – watched a day in the life of a Netflix engineer… good points on failures. We accept an increased failure rate to maintain velocity; we just want to ensure failures are unique. For this to happen we need to have open and safe feedback.

 

Categories
Intro to DevOps

Intro to DevOps – Lesson 1

With all of the technology solutions and paradigms emerging in the IT space, it can be difficult to get a full understanding of everything, particularly before developing biases. So… from the perspective of an infosec and ops guy I will list out some notes from my own review of the current direction of DevOps. This review is based primarily on Udacity – Intro to DevOps and assorted blogs.

Why do it?

Reduce wastage in software development and operation workflows. Simply, more value, less pain.

What is it?

Most of the definitions out there boil down to communication and collaboration between Developers, QA and IT Ops throughout all stages of the development lifecycle.

  • No more passing the release from Dev to IT Ops
  • No more clear boundaries between Dev and IT Ops people/environments/processes and tools
  • No more inconsistency between Dev and Prod environments
  • No more deciding whose problem bugs are
  • No more 7 day release deployments
  • No more separate tool sets


Agile development + Continuous Monitoring + Delivery + Automation + Feedback loops =  DevOps?

  • Create shared view on goals, responsibilities, priorities and benefits
  • Learn from failures (feedback mechanisms include devs and operators)
  • Reduce risk and size of changes
  • Drive automation
  • Drive feedback loops
  • Validate ideas as quickly and cheaply (cost + risk) as possible

What DevOps is not:

  • Developers overtaking operations
  • Just tools (though it really is enabled and perhaps dependent on tools)

How do you apply it?

CAMS – Culture, Automation, Measurement and Sharing

  • Culture -> Agile like (People>Process>Tools) + Lean (don’t do what’s not valuable)
  • Automation -> Deployment, Unit Testing, CI -> These come together with DevOps?
  • Measurement -> Infrastructure, usage, release, performance, business metrics, processes, trends
  • Sharing -> Without the functional separation, feedback loops are tighter; particularly between code and operate

What technologies enable it?

Coming in next lesson – good resource for tools – stackshare.io

Other thoughts

Not much so far – looking forward to testing some tools. Particularly how patching and vulnerability management can be applied to docker images.

 

 

Categories
Random

Configuring Snort Rules

Some reading before starting:

Before setting out, getting some basic concepts about snort is important.

This deployment will be in Network Intrusion Detection System (NIDS) mode, which performs detection and analysis on traffic. See other options and a nice, concise introduction: http://manual.snort.org/node3.html.

Rule application order: activation->dynamic->pass->drop->sdrop->reject->alert->log

Again drawing from the Snort manual, some basic understanding of Snort alerts:

    [**] [116:56:1] (snort_decoder): T/TCP Detected [**]

116 – Generator ID, tells us what component of Snort generated the alert (the second number is the signature ID within that generator, and the third is the rule revision).

Eliminating false positives

After running PulledPork and using the default snort.conf there will likely be a lot of false positives, most of which will come from the preprocessor rules. There are a few options for eliminating false positives; to retain maintainability of the rulesets and the ability to use PulledPork, do not edit rule files directly. I use the following steps:

  1. Create an alternate startup configuration for snort and barnyard2 without -D (daemon) and a barnyard2 config that only writes to stdout, not the database – now we can stop and start snort and barnyard2 quickly to test our rule changes.
  2. Open up the relevant documentation, especially for preprocessor tuning – see the ‘doc’ directory in the snort source.
  3. Have some scripts/traffic replays ready with the traffic/attacks you need to be alerting on.
  4. Iterate: read the doc, make changes to snort.conf (for preprocessor config), add exceptions/suppressions to snort’s threshold.conf or PulledPork’s disablesid, dropsid, enablesid and modifysid confs (examples below), and run the IDS to check for false positives.
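
For example (the GID/SID values and the IP are placeholders):

# threshold.conf – suppress a noisy alert for a known-good source host
suppress gen_id 119, sig_id 2, track by_src, ip 192.168.1.50

# PulledPork disablesid.conf – disable a rule entirely (gid:sid)
1:2010935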

If there are multiple operating systems in your environment, for best results define ipvars to isolate the different OSs. This will ensure you can eliminate false positives whilst maintaining a tight alerting policy.

HttpInspect

From doc: HttpInspect is a generic HTTP decoder for user applications. Given a data buffer, HttpInspect will decode the buffer,  find HTTP fields, and normalize the fields. HttpInspect works on both client requests and server responses.

Global config –
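
From a stock snort.conf the global section looks something like this (exact arguments vary between Snort versions, so treat this as a sketch):

preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 65535 decompress_depth 65535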

Custom rules

Writing custom rules using Snort's lightweight rules description language enables Snort to be used for tasks beyond intrusion detection. This example will look at writing a rule to detect Internet Explorer 6 user agents connecting to port 443 (the resulting rule is sketched after the option categories below).

Rule Headers -> [rule action, protocol, source IP address and port, direction operator, destination IP address and port]

Rule Options -> [content: blah; msg: blah; nocase; http_header;]

Rule Option categories:

  • general – informational only — msg:, reference:, gid:, sid:, rev:, classtype:, priority:, metadata:
  • payload – look for data inside the packet —
    • content: sets rules that search for specific content in the packet payload and trigger a response based on that data (Boyer–Moore pattern match). If there is a match anywhere within the packet's payload the remainder of the rule option tests are performed (case sensitive). Can contain mixed text and binary data; binary data is represented as hexadecimal with pipe separators – (content:”|5c 00|P|00|I|00|P|00|E|00 5c|”;). Multiple content rules can be specified in one rule to reduce false positives. Content has a number of modifiers: [nocase, rawbytes, depth, offset, distance, within, http_client_body, http_cookie, http_raw_cookie, http_header, http_raw_header, http_method, http_uri, http_raw_uri, http_stat_code, http_stat_msg, fast_pattern].
  • non-payload – look for non-payload data
  • post-detection – rule specific triggers that are enacted after a rule has been matched
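
Putting that together, a sketch of the IE6 rule might look like the following (SID 1000001 is an arbitrary local SID; note that http_header can only match if the traffic on port 443 is unencrypted HTTP, since Snort cannot see inside TLS):

alert tcp $HOME_NET any -> $EXTERNAL_NET 443 (msg:"POLICY Internet Explorer 6 user-agent on port 443"; \
    flow:to_server,established; content:"MSIE 6."; nocase; http_header; \
    classtype:policy-violation; sid:1000001; rev:1;)
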
Categories
Random

Validating certificate chains with openssl

Using openssl to verify certificate chains is pretty straightforward – see a full script below.

One thing that confused me for a bit was how to specify trust anchors without importing them into the PKI config of the OS (I also did not want to accept all of the OS trust anchors).

So… here's what to do for specific trust anchors:

# make a directory and copy in all desired trust anchors
# make sure the certs are in pem format, named <bah>.pem
mkdir ~/trustanchors
# create symlinks named with the subject hash
cd ~/trustanchors
for X in ./*.pem;do ln -s $X ./`openssl x509 -hash -noout -in $X`.0;done

# confirm the trust anchor(s) are working as expected
openssl verify -CApath ~/trustanchors -CAfile <some_intermediate>.pem <my_leaf>.pem

So here’s a simple script that will pull the cert chain from a [domain] [port] and let you know if it is invalid – note there will likely be some bugs from characters being encoded / carriage returns missing:

#!/bin/bash

# chain_collector.sh [domain] [port]
# output to stdout
# assumes you have a directory with desired trust anchors at ~/trustanchors

if [ $# -ne 2 ]; then
	echo "USAGE: chain_collector.sh [domain] [port]"
	exit 1
fi

TRUSTANCHOR_DIR="$HOME/trustanchors"
SERVER=$1:$2
TFILE="/tmp/$(basename $0).$$.tmp"
OUTPUT_DIR=$1_$2
mkdir $OUTPUT_DIR

openssl s_client -showcerts -servername $1 -connect $SERVER 2>/dev/null > $TFILE
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "tmpcert." c ".pem"}' < $TFILE 
i=1 
for X in tmpcert.*.pem; do
    if openssl x509 -noout -in $X 2>/dev/null ; then 
        echo "#############################"
        cn=$(openssl x509 -noout -subject -in $X | sed -e 's#.*CN=\(\)#\1#')
	echo CN: $cn
	cp $X $OUTPUT_DIR/${cn// /_}.$((i-1)).pem 
	cert_expiry_date=$(openssl x509 -noout -enddate -in $X \
			| awk -F= ' /notAfter/ { printf("%s\n",$NF); } ')
	seconds_until_expiry=$(echo "$(date --date="$cert_expiry_date" +%s) - $(date +%s)" | bc)
	days_until_expiry=$(echo "$seconds_until_expiry/(60*60*24)" | bc)
	echo Days until expiry: $days_until_expiry
	echo $(openssl x509 -noout -text -in $X | grep -m1 "Signature Algorithm:")
	echo $(openssl x509 -noout -issuer -in $X)
	if [ -a tmpcert.$i.pem ]; then
		echo Parent: $(openssl x509 -noout -subject -in tmpcert.$i.pem | sed -e 's#.*CN=\(\)#\1#')
		echo "Parent Valid?" $(openssl verify -verbose -CApath "$TRUSTANCHOR_DIR" -CAfile tmpcert.$i.pem $X)
	else
		echo "Parent Valid? This is the trust anchor"
	fi
	echo "#############################"
    fi
    ((i++))
done
rm -f tmpcert.*.pem $TFILE
Categories
Random

SSL Review part 2

RSA in practice

Initializing SSL/TLS with https://youtube.com

In this example the YouTube server is authenticated via its certificate and an encrypted communication session is established. Taking a packet capture of the process enables simple identification of the TLSv1.1 handshake (as described: http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake):

Packet capture download: http://mchost/sourcecode/security_notes/youtube_TLSv1.1_handshake_filtered.pcap

The packet capture starts with the TCP three-way handshake – Frames 1-3

With a TCP connection established, the TLS handshake begins with the negotiation phase:

  1. ClientHello – Frame 4 – A random number[90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7], Cipher suites, compression methods and session ticket (if reconnecting session).
  2. ServerHello – Frame 6 – chosen protocol version [TLS 1.1], random number [1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78], CipherSuite [TLS_ECDHE_ECDSA_WITH_RC4_128_SHA], Compression method [null], SessionTicket [null]
  3. Server send certificate message (depending on cipher suite)
  4. Server sends ServerHelloDone
  5. Client responds with ClientKeyExchange containing PreMasterSecret, public key or nothing. (depending on cipher suite) – PreMasterSecret is encrypted using the server public key
  6. Client and server use the random numbers and PreMasterSecret to compute a common secret – the master secret
  7. Client sends ChangeCipherSpec record
  8. Client sends authenticated and encrypted Finished – contains a hash and MAC of previous handshake message
  9. Server decrypts the hash and MAC to verify
  10. Server sends ChangeCipherSpec
  11. Server sends Finished – with hash and MAC for verification
  12. Application phase – the handshake is now complete, application protocol enable with content type 23

client random: 90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7 = 10447666340000000000

server random: 1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78 = 1988109383203082608

Interestingly, the negotiation between youtube.com and the Chromium browser resulted in an Elliptic Curve Cryptography (ECC) cipher suite for Transport Layer Security (TLS) being chosen.
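
To confirm the negotiated protocol and cipher suite without a packet capture, something like this works (the grep pattern matches the session summary openssl prints; the output format varies slightly between openssl versions):

echo | openssl s_client -connect youtube.com:443 -servername youtube.com 2>/dev/null \
    | grep -E "Protocol|Cipher"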

Note that there is no step mentioned here for the client to verify the certificate. In the past most browsers would query a certificate revocation list (CRL), though browsers such as Chrome now either ignore CRL functionality or use certificate pinning.

Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said. – source: http://arstechnica.com/business/2012/02/google-strips-chrome-of-ssl-revocation-checking/