In this example the YouTube server is authenticated via its certificate and an encrypted communication session is established. Taking a packet capture of the process enables simple identification of the TLSv1.1 handshake (as described at http://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake):
With a TCP connection established, the TLS handshake begins with the negotiation phase:
ClientHello – Frame 4 – a random number [90:fd:91:2e:d8:c5:e7:f7:85:3c:dd:f7:6d:f7:80:68:ae:2b:05:8e:03:44:f0:e8:15:22:69:b7], cipher suites, compression methods and a session ticket (if reconnecting a session).
ServerHello – Frame 6 – chosen protocol version [TLS 1.1], random number [1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78], CipherSuite [TLS_ECDHE_ECDSA_WITH_RC4_128_SHA], Compression method [null], SessionTicket [null]
Server sends a Certificate message (depending on cipher suite)
Server sends ServerHelloDone
Client responds with ClientKeyExchange containing the PreMasterSecret, a public key, or nothing (depending on cipher suite) – the PreMasterSecret is encrypted using the server's public key
Client and server use the random numbers and PreMasterSecret to compute a common secret – the master secret
Client sends ChangeCipherSpec record
Client sends an authenticated and encrypted Finished message – containing a hash and MAC over the previous handshake messages
Server decrypts the Finished message and verifies the hash and MAC
Server sends ChangeCipherSpec
Server sends Finished – with hash and MAC for verification
Application phase – the handshake is now complete and the application protocol is enabled, with content type 23
server random: 1b:97:2e:f3:58:70:d1:70:d1:de:d9:b6:c3:30:94:e0:10:1a:48:1c:cc:d7:4d:a4:b5:f3:f8:78 – the first eight bytes equal 1988109383203082608 in decimal
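The decimal value quoted for the server random is simply its first eight bytes interpreted as a big-endian integer; this can be checked quickly in Python:

```python
# First 8 bytes of the ServerHello random, taken from the capture above
first_eight = "1b:97:2e:f3:58:70:d1:70".replace(":", "")
value = int(first_eight, 16)  # interpret the hex string as a big-endian integer
print(value)  # 1988109383203082608
```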
Interestingly, the negotiation between youtube.com and the Chromium browser resulted in an Elliptic Curve Cryptography (ECC) cipher suite for Transport Layer Security (TLS) being chosen.
Note that there is no step mentioned here for the client to verify the certificate. In the past most browsers would query a certificate revocation list (CRL), though browsers such as Chrome now either ignore CRL functionality or use certificate pinning.
Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch. The time frame for the Chrome changes to go into effect are “on the order of months,” a Google spokesman said. – source: http://arstechnica.com/business/2012/02/google-strips-chrome-of-ssl-revocation-checking/
This issue is caused by iptables rules that track connection state. If the number of connections being tracked exceeds the default nf_conntrack table size [65536], any additional connections will be dropped. It is most likely to occur on machines used for NAT and on scanning/discovery tools (such as Nessus and Nmap).
Symptoms: once the connection table is full, any additional connection attempts will be blackholed.
To check the current number of connections being tracked by conntrack:
/sbin/sysctl net.netfilter.nf_conntrack_count
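Comparing the current count to the table maximum shows how close you are to drops. A minimal sketch (the sysctl reads are shown in comments and replaced with sample values here, since the keys only exist on hosts with conntrack loaded):

```shell
# On a live host:
#   count=$(/sbin/sysctl -n net.netfilter.nf_conntrack_count)
#   max=$(/sbin/sysctl -n net.netfilter.nf_conntrack_max)
# Sample values for illustration:
count=60000
max=65536

pct=$(( count * 100 / max ))   # integer percentage of the table in use
echo "conntrack table ${pct}% full"
[ "$pct" -ge 90 ] && echo "WARNING: conntrack table near capacity"
```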
Options for fixing the issue are:
Stop using stateful connection rules in iptables (probably not an option in most cases)
Increase the size of the connection tracking table (also requires increasing the conntrack hash table)
Decreasing timeout values, reducing how long connection attempts are stored (this is particularly relevant for Nessus scanning machines that can be configured to attempt many simultaneous port scans across an IP range)
Making the changes persistent – RHEL 6 examples:
# 2: Increase number of connections
echo "net.netfilter.nf_conntrack_max = 786432" >> /etc/sysctl.conf
echo "net.netfilter.nf_conntrack_buckets = 196608" >> /etc/sysctl.conf
# Increase the number of buckets to change the ratio from 1:8 to 1:4 (more memory use but better performance)
echo 'echo "196608" > /sys/module/nf_conntrack/parameters/hashsize' >> /etc/rc.local
# 3: Alter timeout values
# Generic timeout from 10 mins to 1 min
echo "net.netfilter.nf_conntrack_generic_timeout = 60" >> /etc/sysctl.conf
# Change unacknowledged timeout to 30 seconds (from 10 mins)
echo "net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30" >> /etc/sysctl.conf
# Change established connection timeout to 1 hour (from 10 days)
echo "net.netfilter.nf_conntrack_tcp_timeout_established = 3600" >> /etc/sysctl.conf
These changes will persist on reboot.
To apply the changes without a reboot run the following:
/sbin/sysctl -p
echo "196608" > /sys/module/nf_conntrack/parameters/hashsize
Many older web applications do not apply headers/flags that are now considered standard information security practice. For example:
Pragma: no-cache
Cache-Control: no-cache
httpOnly and secure flags
Adding these controls can be achieved using ModSecurity without any need to modify the application code.
In the case where I needed to modify the cookie headers to include these new controls, I added the following to the core rule set file modsecurity_crs_16_session_hijacking.conf:
#
# This rule will identify the outbound Set-Cookie SessionID data and capture it in a setsid
#
# adding httpOnly
Header edit Set-Cookie "(?i)^(JSESSIONID=(?:(?!httponly).)+)$" "$1; httpOnly"
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "0"
This adds the cookie controls we were after – depending on your web application you may need to change 'JSESSIONID' to the name of the relevant cookie.
You can find the cookie name using browser tools such as Chrome's Developer Tools (hit F12 in Chrome). Load the page you want to check cookies for and click on the Resources tab:
After setting the HttpOnly and Secure flags you can check their effectiveness by opening the Console tab and listing the document cookies… which should now return nothing.
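The Header edit regex only appends the flag when it is not already present, thanks to the negative lookahead. Its behaviour can be sketched in Python using the same pattern (the cookie values below are made up for illustration):

```python
import re

# Same pattern as the Apache "Header edit" directive above: match a
# JSESSIONID Set-Cookie value only if it does not already contain "httponly"
pattern = re.compile(r"(?i)^(JSESSIONID=(?:(?!httponly).)+)$")

def add_httponly(cookie):
    # Append "; httpOnly" when the pattern matches, mimicking the "$1; httpOnly" replacement
    m = pattern.match(cookie)
    return m.group(1) + "; httpOnly" if m else cookie

print(add_httponly("JSESSIONID=ABC123; Path=/"))            # flag appended
print(add_httponly("JSESSIONID=ABC123; Path=/; HttpOnly"))  # left unchanged
```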
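The same check can be scripted; here the response headers are hard-coded for illustration (on a live system you would capture them with something like `curl -sI https://your-app/`):

```shell
# Sample response headers after the Header directives above are in place
headers='Set-Cookie: JSESSIONID=ABC123; Path=/; Secure; HttpOnly
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache'

echo "$headers" | grep -qi 'httponly' && echo "HttpOnly flag present"
echo "$headers" | grep -qi 'no-store' && echo "Cache-Control hardened"
echo "$headers" | grep -qi 'pragma: no-cache' && echo "Pragma set"
```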
The distinguished name order of OpenSSL may be the opposite of the EJBCA default configuration – http://www.csita.unige.it/software/free/ejbca/ … If so, this ordering must be changed in the EJBCA configuration prior to deploying (it can't be set on a per-CA basis).
Have not been able to replicate this issue in testing.
Import existing TinyCA CA
Basic Admin and User operations
Create an end entity profile for server/client entities
Step 2 – Sign CSR using the End Entity which is associated with a CA
Importing existing certificates
EJBCA can create end entities and import their existing certificates one-by-one or in bulk (http://www.ejbca.org/docs/adminguide.html#Importing Certificates). Bulk inserts import all certificates under a single user, which may not be desirable. Below is a script to import all certs in a directory one by one, each under a new end entity that takes the name of the certificate CN.
#!/bin/sh
# for each certificate in the directory
# create an end user entity
# end user entity username = certificate CN
# end user entity token/password = certificate CN
EJBCA_HOME="/usr/share/ejbca"
IMPORT_DIR=$1
CA=$2
ENDENTITYPROFILE=$3
SSLCERTPROFILE=$4
AP="_OTE"
if [ $# -lt 4 ]; then
echo "usage: import_existing_certs.sh <import_dir> <ca_name> <end_entity_profile> <cert_profile>"
exit 1
fi
for X in "$IMPORT_DIR"/*.pem
do
echo "######################################################"
echo "Importing: " $X
CN=$(openssl x509 -in "$X" -noout -text | grep "Subject:" | sed -n 's|^.*CN=\([^,/]*\).*|\1|p')
echo "CN: " $CN
printf "Running import: %s ca importcert '%s' '%s' '%s' ACTIVE null '%s' '%s' '%s'\n" "$EJBCA_HOME/bin/ejbca.sh" "$CN$AP" "$CN$AP" "$CA" "$X" "$ENDENTITYPROFILE" "$SSLCERTPROFILE"
$EJBCA_HOME/bin/ejbca.sh ca importcert "$CN$AP" "$CN$AP" "$CA" ACTIVE null "$X" "$ENDENTITYPROFILE" "$SSLCERTPROFILE"
echo "######################################################"
done
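The CN extraction in the script relies on a sed capture; its behaviour can be tested standalone with a sample subject line (the hostname below is hypothetical):

```shell
# Sample output of: openssl x509 -in cert.pem -noout -text | grep "Subject:"
subject='Subject: C=AU, O=Example, CN=server01.example.com'

# Capture everything after CN= up to the next comma or slash
cn=$(printf '%s\n' "$subject" | sed -n 's|^.*CN=\([^,/]*\).*|\1|p')
echo "CN: $cn"  # CN: server01.example.com
```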
#Generate CRL via command line
# List CAs
/usr/share/ejbca/bin/ejbca.sh CA listcas
# Create new CRLs:
/usr/share/ejbca/bin/ejbca.sh CA createcrl "<CA name>" -pem
# Export CRL to file
/usr/share/ejbca/bin/ejbca.sh CA getcrl "<CA name>" -pem <output file>.pem
Checking certificate validity/revocation status via OCSP
XSS, CSRF and similar types of web application attacks have overtaken SQL injection as the most commonly seen attacks on the internet (https://info.cenzic.com/2013-Application-Security-Trends-Report.html). A very large number of web applications were written and deployed before XSS attacks became as prevalent and as well understood as they are today. Thus, it is extremely important to have an effective method of testing for XSS vulnerabilities and mitigating them.
Changes to production code bases can be slow and costly, and can easily miss unreported vulnerabilities. Application firewalls such as ModSecurity (https://github.com/SpiderLabs/ModSecurity/) become an increasingly attractive solution when deciding how to mitigate current and future XSS vulnerabilities.
ModSecurity can be embedded in Apache, NGINX and IIS, which is relatively straightforward. In cases where alternative web servers are being used, ModSecurity can still be a viable option by creating a reverse proxy (using Apache or NGINX).
A default action can be created for a group of rules using the configuration directive “SecDefaultAction”
Using a SecDefaultAction at the top of a rule set on which we want to enable blocking and transforming is a blunt method of protection. Redirection can also be used as a method of blocking.
A powerful web application firewall – free software!
Example of a default action to be applied by ruleset (note defaults cascade through the ruleset files):
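A minimal sketch of such a default action follows; the phase, status code and redirect target are placeholders that should be tuned to your environment:

```apache
# Applies to all rules that follow in this ruleset (and cascades into later ruleset files)
SecDefaultAction "phase:2,log,auditlog,deny,status:403"

# Or redirect instead of denying outright:
# SecDefaultAction "phase:2,log,auditlog,redirect:http://www.example.com/"
```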
Using the optional rulesets modsecurity_crs_16_session_hijacking.conf and modsecurity_crs_43_csrf_protection.conf, ModSecurity can provide protection against Cross Site Request Forgery [CSRF]. The @rsub operator can inject a token into every form (and/or other HTML elements). ModSecurity can store the expected token value as a variable which is compared to the value posted via forms or other HTML elements. ModSecurity rules can be based on request methods, URIs and so on – alongside the ability to chain rules there are a huge number of options for mitigating XSS and CSRF without impacting normal application usage.
@rsub
Requirements:
SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess On
## To enable @rsub
SecStreamOutBodyInspection On
SecStreamInBodyInspection On
SecContentInjection On
Injecting unique request id from mod_unique_id into forms:
<LocationMatch ".*\/<directory requiring authentication>\/.*">
# All requests submitted using POST require a token - note the validation of the token can only be completed if that variable was stored from a previous response
SecRule REQUEST_METHOD "^(?:POST)$" "chain,phase:2,id:'1234',t:none,block,msg:'CSRF Attack Detected - Missing CSRF Token when using POST method - ',redirect:/" SecRule &ARGS:token "!@eq 1" "setvar:'tx.msg=%{rule.msg}',setvar:tx.anomaly_score=+%{tx.critical_anomaly_score},setvar:tx.%{rule.id}-WEB_ATTACK/CSRF-%{matched_var_name}=%{matched_var}"
# Check referrer is valid for an authenticated area of the application
SecRule REQUEST_HEADERS:Referer "!@contains <my website>" "block,phase:2,id:'2345',t:none,block,msg:'CSRF Attack Detected - No external referers allowed to internal portal pages',redirect:/"
SecRule REQUEST_URI "@contains confirmUpdate" "chain,phase:2,id:'3456',t:none,block,msg:'CSRF Attack Detected - Missing CSRF Token. Confirmation button - ',redirect:/" SecRule &ARGS:rv_token "!@eq 1" "setvar:'tx.msg=%{rule.msg}',setvar:tx.anomaly_score=+%{tx.critical_anomaly_score},setvar:tx.%{rule.id}-WEB_ATTACK/CSRF-%{matched_var_name}=%{matched_var}"
</LocationMatch>
Pros:
Wide capabilities for logging, alerts, blocking, redirecting, transforming
Parses everything coming into your web server over HTTP
Virtual patching – if a vulnerability affecting your web application is made public, you can write and deploy a rule to mitigate it much faster than re-releasing patched application code
Extended uses – the capabilities of ModSecurity can be applied to applications outside the scope of application security
Cons:
Added complexity to your application delivery chain – another point for maintenance and failure
Performance costs? – Though I have not had the opportunity to measure them, holding session information in memory and inspecting every byte of HTTP traffic cannot be free of performance cost
Hardware costs – Particularly if using ModSecurity’s BodyAccess and BodyInspection features, memory usage will be significant
Improving deployments:
Starting off being aggressive on warnings and very light on action is a necessity to ensure no impact on normal application usage
From this point rules and actions need to be refined
Understanding how the application works allows the use of ModSecurity's header and body inspection in effective ways
Some other notes extracted from the ModSecurity Handbook – If you decide to use ModSecurity I strongly recommend buying the handbook. It is not expensive and saves a lot of time.
### STRING MATCHING OPERATORS ###
@beginsWith Input begins with parameter
@contains Input contains parameter
@endsWith Input ends with parameter
@rsub Manipulation of request and response bodies
@rx Regular pattern match in input
@pm Parallel pattern matching
@pmFromFile (also @pmf as of 2.6) Parallel patterns matching, with patterns read from a file
@streq Input equal to parameter
@within Parameter contains input
### NUMBER MATCHING OPERATORS ###
@eq Equal
@ge Greater or equal
@gt Greater than
@le Less or equal
@lt Less than
### ACTIONS ###
# DISRUPTIVE
allow Stop processing of one or more remaining phases
block Indicate that a rule wants to block
deny Block transaction with an error page
drop Close network connection
pass Do not block, go to the next rule
pause Pause for a period of time, then execute allow.
proxy Proxy request to a backend web server
redirect Redirect request to some other web server
# FLOW
chain Connect two or more rules into a single logical rule
skip Skip over one or more rules that follow
skipAfter Skip after the rule or marker with the provided ID
Most of us use and rely on SSL every day. The mathematical workings of the RSA [Rivest, Shamir, Adleman] algorithm are not overly complex, but mapping everything back to what happens in reality requires detailed understanding. Skipping over the need for SSL (the confidential and authenticated exchange of a symmetric key over an insecure medium), I will review the mathematical workings and then how they are applied in real world examples.
There are also details in previous posts – RSA1, RSA2
RSA decrypt -> ciph ^ d mod n = 3095021178047041558314072884014000324030086129008597834642883051983162360819331 ^ 944402082567056818708092537028397604145319798848072425038015030084640082599681 mod 4052729130775091849638047446256554071699019514021047339267026030072286291982163
When Alice encrypts using Bob’s public key (e) along with the key modulus (n) the output is a protected cipher.
An eavesdropper does not know the private key so decryption is very difficult:
Attacker must solve:
(unknown val, x) ^ e mod n = ciph
x ^65537 mod 4052729130775091849638047446256554071699019514021047339267026030072286291982163 = 3095021178047041558314072884014000324030086129008597834642883051983162360819331
OR, easier – try to determine the private key:
The attacker knows e and n (which = pq). When we created the private key (step 6 above) we computed e^-1 mod φ(n) – the modular multiplicative inverse, which is relatively fast for us to calculate.
The attacker does not know e^-1 mod φ(n), though, and φ(n) = (p – 1)(q – 1). The attacker knows that n is a composite of two primes, n = pq (where p and q are both prime).
So… if the attacker can recover p and q from n (which they know) then RSA is insecure.
Thankfully the process of integer factorization is so much harder than the process of creating p, q, n, φ(n), e and d that online business and confidentiality can be maintained at acceptable levels.
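The same recipe can be walked through with toy numbers. The primes below are tiny, chosen only for readability (real keys use primes hundreds of digits long, which is what makes factoring n infeasible):

```python
# Toy RSA with small primes -- illustrative only, never secure at this size
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120, Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi (Python 3.8+)

m = 65                     # plaintext (must be < n)
c = pow(m, e, n)           # encrypt: m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers m

print(f"n={n}, e={e}, d={d}, cipher={c}")
```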
Threats to RSA
It would be extremely valuable to malicious individuals/groups and (more importantly) intelligence organizations to make large integer factorization efficient enough to break RSA.
Coming back to R after closing, a session can be restored by simply running R in the workspace directory.
A history file can be specified via:
# recall your command history
loadhistory(file="myfile") # default is ".Rhistory"
RData can also be saved and loaded via:
# save the workspace to the file .RData in the cwd
save.image()
# save specific objects to a file
# if you don't specify the path, the cwd is assumed
save(object list,file="myfile.RData")
# load a workspace into the current session
# if you don't specify the path, the cwd is assumed
load("myfile.RData")
Describing data:
# list objects in the workspace
ls()
# show dimensions of a data object 'd'
dim(d)
#show structure of data object 'd'
str(d)
#summary of data 'd'
summary(d)
# histograms with ggplot2
library(ggplot2)
ggplot(d, aes(x = write)) + geom_histogram()
# Or kernel density plots
ggplot(d, aes(x = write)) + geom_density()
# Or boxplots showing the median, lower and upper quartiles and the full range
ggplot(d, aes(x = 1, y = math)) + geom_boxplot()
Let's look at some more ways to understand the data set:
# density plots by program type
ggplot(d, aes(x = write)) + geom_density() + facet_wrap(~prog)
# box plot of math scores for each teaching program
ggplot(d, aes(x = factor(prog), y = math)) + geom_boxplot()
Extending visualizations:
library(reshape2)  # provides melt()
ggplot(melt(d[, 7:11]), aes(x = variable, y = value)) + geom_boxplot()
# break down by program:
ggplot(melt(d[, 6:11], id.vars = "prog"), aes(x = variable, y = value, fill = factor(prog))) + geom_boxplot()
Analysis of categories can be conducted with frequency tables:
xtabs(~female, data = d)
xtabs(~race, data = d)
xtabs(~prog, data = d)
xtabs(~ses + schtyp, data = d)
Finally let's have a look at some bivariate (pairwise) correlations. If there is no missing data the cor function can be used directly; otherwise incomplete cases can be dropped:
# pairwise correlations of the score columns, dropping incomplete cases
cor(d[, 7:11], use = "complete.obs")
This is a valid question considering that most languages/frameworks, including CUDA have statistical analysis libraries built in. Hopefully running through some introductory exercises will reveal the benefits.
If converting Excel spreadsheets to CSV is too much of a hassle, the xlsx package we imported will do the job:
# these two steps only needed to read excel files from the internet
f <- tempfile("hsb2", fileext=".xls")
download.file("http://www.ats.ucla.edu/stat/data/hsb2.xls", f, mode="wb")
dat.xls <- read.xlsx(f, sheetIndex=1)
Viewing Data:
# first few rows
head(dat.csv)
# last few rows
tail(dat.csv)
# variable names
colnames(dat.csv)
# pop-up view of entire data set
View(dat.csv)
Datasets that have been read in are stored as data frames, which have a matrix-like structure. The most common method of indexing is object[row, column], but many others are available.
# single cell value
dat.csv[2, 3]
# omitting row value implies all rows; here all rows in column 3
dat.csv[, 3]
# omitting column values implies all columns; here all columns in row 2
dat.csv[2, ]
# can also use ranges - rows 2 and 3, columns 2 and 3
dat.csv[2:3, 2:3]
Variables can also be accessed via their names:
# get first 10 rows of variable female using two methods
dat.csv[1:10, "female"]
dat.csv$female[1:10]
The c function is used to combine values of a common type into a vector:
# get column 1 for rows 1, 3 and 5
dat.csv[c(1, 3, 5), 1]
## [1] 70 86 172
# get row 1 values for variables female, prog and socst
dat.csv[1, c("female", "prog", "socst")]
## female prog socst
## 1 0 1 57
Renaming columns:
colnames(dat.csv) <- c("ID", "Sex", "Ethnicity", "SES", "SchoolType", "Program",
"Reading", "Writing", "Math", "Science", "SocialStudies")
# to change one variable name, just use indexing
colnames(dat.csv)[1] <- "ID2"
Saving data:
#write.csv(dat.csv, file = "path/to/save/filename.csv")
#write.table(dat.csv, file = "path/to/save/filename.txt", sep = "\t", na=".")
#write.dta(dat.csv, file = "path/to/save/filename.dta")
#write.xlsx(dat.csv, file = "path/to/save/filename.xlsx", sheetName="hsb2")
# save to binary R format (can save multiple datasets and R objects)
#save(dat.csv, dat.dta, dat.spss, dat.txt, file = "path/to/save/filename.RData")
#change workspace directory
setwd("/home/a/Desktop/R/testspace1")
Usage: sh GetYoutubePlaylist.sh [youtube playlist URL] [output_directory]
Example:
sh ./GetYoutubePlaylist.sh http://www.youtube.com/playlist?list=PL702CAF4AD2AED35B ~/Music
To get the YouTube playlist URL, view the playlist by clicking on its name (no videos will be playing), then click on share and the URL will be highlighted:
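A script like this presumably pulls the playlist ID out of the `list=` query parameter; a minimal sketch using POSIX parameter expansion, with the example URL from above:

```shell
# Playlist URL as shown in the example above
url='http://www.youtube.com/playlist?list=PL702CAF4AD2AED35B'

# Strip everything up to and including "list=" to get the playlist ID
id=${url##*list=}
echo "Playlist ID: $id"  # Playlist ID: PL702CAF4AD2AED35B
```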