Yegor's blog

Small blog about system administration.

HOWTO: set up MySQL with SSL, SSL replication, and secure connections from the console

Set up SSL on MySQL

1. Generate the SSL certificates. Use a different Common Name for the server and client certificates.
2. For reference, I store the generated files under /etc/mysql-ssl/.
3. Add the following lines to /etc/my.cnf under [mysqld] section:

# SSL
ssl-ca=/etc/mysql-ssl/ca-cert.pem
ssl-cert=/etc/mysql-ssl/server-cert.pem
ssl-key=/etc/mysql-ssl/server-key.pem



4. Restart MySQL.
5. Create a user that is only permitted to connect over SSL:
GRANT ALL PRIVILEGES ON *.* TO 'ssluser'@'%' IDENTIFIED BY 'pass' REQUIRE SSL;
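Step 1 above is the part people most often get wrong, so here is a sketch of generating a self-signed CA plus server and client certificates with openssl. The directory name, Common Names, key size, and validity period are my own choices, not anything mandated by MySQL; note that older MySQL builds (yaSSL) may require converting the keys to PKCS#1 with `openssl rsa` afterwards.

```shell
mkdir -p mysql-ssl

# CA key and self-signed CA certificate
openssl genrsa 2048 > mysql-ssl/ca-key.pem
openssl req -new -x509 -nodes -days 3650 -key mysql-ssl/ca-key.pem \
    -out mysql-ssl/ca-cert.pem -subj "/CN=MySQL-CA"

# Server key/certificate (Common Name must differ from the client's)
openssl req -newkey rsa:2048 -nodes -keyout mysql-ssl/server-key.pem \
    -out mysql-ssl/server-req.pem -subj "/CN=mysql-server"
openssl x509 -req -in mysql-ssl/server-req.pem -days 3650 \
    -CA mysql-ssl/ca-cert.pem -CAkey mysql-ssl/ca-key.pem \
    -set_serial 01 -out mysql-ssl/server-cert.pem

# Client key/certificate
openssl req -newkey rsa:2048 -nodes -keyout mysql-ssl/client-key.pem \
    -out mysql-ssl/client-req.pem -subj "/CN=mysql-client"
openssl x509 -req -in mysql-ssl/client-req.pem -days 3650 \
    -CA mysql-ssl/ca-cert.pem -CAkey mysql-ssl/ca-key.pem \
    -set_serial 02 -out mysql-ssl/client-cert.pem

# Both certificates should verify against the CA
openssl verify -CAfile mysql-ssl/ca-cert.pem \
    mysql-ssl/server-cert.pem mysql-ssl/client-cert.pem
```

Copy the resulting files to /etc/mysql-ssl/ and make sure the mysql user can read them.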

Establish a secure connection from the console

1. If the client is on a different node, copy /etc/mysql-ssl/ from the server to that node.
2. Add the following lines to /etc/my.cnf under [client]:

# SSL
ssl-cert=/etc/mysql-ssl/client-cert.pem
ssl-key=/etc/mysql-ssl/client-key.pem



3. Test a secure connection:
[root@centos6 ~]# mysql -u ssluser -p -sss -e '\s' | grep SSL
SSL: Cipher in use is DHE-RSA-AES256-SHA

Set up SSL replication

1. Establish a secure connection from the console on the slave, as described above, to make sure SSL works fine.
2. On the master, add REQUIRE SSL to the replication user:
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' REQUIRE SSL;
3. Change the master options and restart the slave:
STOP SLAVE;
CHANGE MASTER TO MASTER_SSL=1,
MASTER_SSL_CA='/etc/mysql-ssl/ca-cert.pem',
MASTER_SSL_CERT='/etc/mysql-ssl/client-cert.pem',
MASTER_SSL_KEY='/etc/mysql-ssl/client-key.pem';
SHOW SLAVE STATUS\G
START SLAVE;
SHOW SLAVE STATUS\G

Establish a secure connection from PHP

1. Install the php and php-mysql packages. I use version >= 5.4.x; older versions may not work.
2. Create the script:
[root@centos6 ~]# cat mysqli-ssl.php
<?php
$conn = mysqli_init();
mysqli_ssl_set($conn, '/etc/mysql-ssl/client-key.pem', '/etc/mysql-ssl/client-cert.pem', NULL, NULL, NULL);
if (!mysqli_real_connect($conn, '127.0.0.1', 'ssluser', 'pass')) { die(); }
$res = mysqli_query($conn, 'SHOW STATUS LIKE "Ssl_cipher"');
print_r(mysqli_fetch_row($res));
mysqli_close($conn);
3. Test it:
[root@centos6 ~]# php mysqli-ssl.php
Array
(
[0] => Ssl_cipher
[1] => DHE-RSA-AES256-SHA
)


HOWTO: Configure Logging and Log Rotation in Nginx

One of the easiest ways to save yourself trouble with your web server is to configure appropriate logging today. Logging information on your server gives you access to the data that will help you troubleshoot and assess situations as they arise.


The Error_log Directive

Nginx uses a few different directives to control system logging. The one included in the core module is called "error_log".

Error_log Syntax

The "error_log" directive is used to handle logging general error messages. If you are coming from Apache, this is very similar to Apache's "ErrorLog" directive.
The error_log directive takes the following syntax:
error_log log_file [log_level];
The "log_file" in the example specifies the file where the logs will be written. The "log_level" specifies the lowest level of logging that you would like to record.
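For example, to log messages of level warn and above to a dedicated file (the path and level here are illustrative):

```nginx
# records warn, error, crit, alert and emerg messages
error_log /var/log/nginx/error.log warn;
```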

The Access_log Directive

The access_log directive uses some similar syntax to the error_log directive, but is more flexible. It is used to configure custom logging.
The access_log directive uses the following syntax:
access_log /path/to/log/location [ format_of_log buffer_size ];
The default format for access_log is the predefined "combined" format. You can also use any custom format defined with a log_format directive.
The buffer size is the maximum size of data that Nginx will hold before writing it all to the log. You can also specify compression of the log file by adding "gzip" into the definition:
access_log location format gzip;
Unlike the error_log directive, if you do not want logging, you can turn it off by specifying:
access_log off;
It is not necessary to write to "/dev/null" in this case.
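Putting the pieces together, here is a sketch of a custom format with buffering and compression. "main" is simply a name I chose for the format; the fields are standard nginx variables:

```nginx
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

# buffer up to 32k of entries before writing, and gzip them on write
access_log /var/log/nginx/access.log main buffer=32k gzip;
```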

Log Rotation

As log files grow, it becomes necessary to manage the logging mechanisms to avoid filling up disk space. Log rotation is the process of switching out log files and possibly archiving old files for a set amount of time.
Nginx does not provide its own tools to manage log files, but it does include mechanisms that make log rotation simple. For example, you can embed the current date in the log file name by parsing the $time_iso8601 variable (note that "if" blocks are only valid in server and location context, and the worker processes need permission to create the dated files):
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})") {
    set $year $1;
    set $month $2;
    set $day $3;
}
access_log /var/log/nginx/$year-$month-$day-access.log;
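In practice, most distributions rotate nginx logs with logrotate instead. A typical /etc/logrotate.d/nginx looks roughly like the following (Debian-style; the retention and schedule are examples, and the USR1 signal tells nginx to reopen its log files after rotation):

```
/var/log/nginx/*.log {
        daily
        missingok
        rotate 14
        compress
        delaycompress
        notifempty
        sharedscripts
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
        endscript
}
```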

Conclusion

Proper log configuration and management can save you time and energy in the event of a problem with your server. Having easy access to the information that will help you diagnose a problem can be the difference between a trivial fix and a persistent headache.
It is important to keep an eye on server logs in order to maintain a functional site and ensure that you are not exposing sensitive information. This guide should serve only as an introduction to your experience with logging.


Ubuntu/Debian - Encrypted incremental backups with duplicity on Amazon S3

An example on how to use duplicity to perform encrypted incremental backups on Amazon S3.

Getting started

If you've never heard about duplicity before, you should check the documentation.

Install duplicity

First, you need to install duplicity. I usually install it from source, since the packaged version is often outdated.
$ sudo apt-get install python-dev librsync-dev
$ cd /opt
$ sudo wget https://code.launchpad.net/duplicity/0.6-series/0.6.20/+download/duplicity-0.6.20.tar.gz
$ sudo tar xvzf duplicity-0.6.20.tar.gz
$ cd duplicity-0.6.20
$ sudo python setup.py install
Alternatively, you can install it with apt-get:
$ sudo apt-get install duplicity
Next, you can also install s3cmd from S3 Tools, a command line tool for managing your S3 buckets (not required):
$ sudo apt-get install s3cmd
$ s3cmd --configure

Encrypted Backups

Before backing up the data, you need to think about encryption. Duplicity makes use of gpg and handles both a private/public key pair (a gpg key) and symmetric encryption (a passphrase).
I use a passphrase, since I will never lose it and I don't have to back up a gpg key.

My backup script

Since you need to pass many arguments to perform the different actions, I crafted a bash script, duptools, that makes working with duplicity easier.

Features

  • Backup multiple directories
  • Send email report on backup
  • Quickly list file and show bucket status
  • Restore your files easily

Duptools

#!/bin/bash
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export PASSPHRASE=YOUR_PASSPHRASE

# directories, space separated
SOURCE="/home/yegorg/backup /home/yegorg/bin /home/yegorg/documents"
BUCKET=s3+http://mybucket
LOGFILE=/home/yegorg/tmp/duplicity.log
# set email to receive a backup report
EMAIL=""

backup() {
  INCLUDE=""
  for CDIR in $SOURCE
  do
    TMP=" --include  ${CDIR}"
    INCLUDE=${INCLUDE}${TMP}
  done
  # perform an incremental backup to root, include directories, exclude everything else, / as reference.
  duplicity --full-if-older-than 30D $INCLUDE --exclude '**' / $BUCKET > $LOGFILE
  if [ -n "$EMAIL" ]; then
    mail -s "backup report" $EMAIL < $LOGFILE
  fi
}

list() {
  duplicity list-current-files $BUCKET
}

restore() {
  if [ $# = 2 ]; then
    duplicity restore --file-to-restore $1 $BUCKET $2
  else
    duplicity restore --file-to-restore $1 --time $2 $BUCKET $3
  fi
}

status() {
  duplicity collection-status $BUCKET
}

if [ "$1" = "backup" ]; then
  backup
elif [ "$1" = "list" ]; then
  list
elif [ "$1" = "restore" ]; then
  if [ $# = 3 ]; then
    restore $2 $3
  else
    restore $2 $3 $4
  fi
elif [ "$1" = "status" ]; then
  status
else
  echo "
  duptools - manage duplicity backup

  USAGE:

  ./duptools.sh backup 
  ./duptools.sh list
  ./duptools.sh status
  ./duptools.sh restore file [time] dest
  "
fi

export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export PASSPHRASE=

Installation

Set up config vars at the top of the script and make the script executable.

Backup

$ ./duptools.sh backup

List/Status

$ ./duptools.sh list
$ ./duptools.sh status

Restore

Be careful while restoring not to prepend a slash to the path.
Restoring a single file to tmp:
$ ./duptools.sh restore home/yegorg/bin/setupscreen tmp/setupscreen
Restoring an older version of a directory to tmp (interval or full date):
$ ./duptools.sh  restore home/yegorg/bin 1D3h5s tmp/bin
$ ./duptools.sh  restore home/yegorg/bin 2012/7/5 tmp/bin


cPanel - Domain already exists error while adding a subdomain or addon domain

A very common error when adding an addon or parked domain via cPanel. Unfortunately, the error shown is usually very generic, such as "can not create domain", and provides no useful detail. It is usually caused by the domain name already existing somewhere in the cPanel configuration, and checking the locations below should help you find where the domain is lurking and causing the issue.

Reason 1: There is an existing zone file on the server.

1) Use the following command to check whether a zone exists:

# dig @server_ip domain.com

If a zone file exists, this will return the A record for domain.com.

2) If a zone file exists, log into the server the zone file points to and make sure the domain doesn't exist there:

/scripts/whoowns domain.com

If it does, remove it before adding the new addon domain. If it does not, continue on.

3) Remove the zone file from the server by running the following command:

/scripts/killdns domain.com

This will remove the DNS zone and will help you add the addon or parked domain again.

Reason 2: There are old traces of the domain on the server

1) Log into the server where the customer is seeing problems adding the domain and confirm that the domain does not exist on the server.

/scripts/whoowns domain.com

2) Check the cPanel files for traces of the problem domain name:

grep domain.com /var/cpanel/users/*

grep -R domain.com /var/cpanel/userdata/*

3) Edit any files that are found and remove the traces of the domain name the customer is trying to add. You also may need to remove the entire file for the domain in the /var/cpanel/userdata/USERNAME/ directory.

4) Rebuild the user domains database:

/scripts/updateuserdomains

5) Rebuild the Apache configuration and make sure Apache is running with all traces of the bad domain removed:

/scripts/rebuildhttpdconf 
# service httpd restart

This should remove all traces left behind from when the domain name was removed in the past, so it will no longer cause a conflict when the customer tries to add the domain again.
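The search in step 2 can be wrapped in a small script. This is only a sketch: on a real server CPANEL_ROOT would be /var/cpanel, but here the script builds a tiny mock tree (a hypothetical user "bob") so it can be exercised anywhere; delete the fixture section before using it for real.

```shell
#!/bin/bash
# Find traces of a domain in cPanel's users/ and userdata/ trees.
CPANEL_ROOT="${CPANEL_ROOT:-./mock-cpanel}"
DOMAIN="${1:-domain.com}"

# --- demo fixture (remove on a real server) ---
mkdir -p "$CPANEL_ROOT/users" "$CPANEL_ROOT/userdata/bob"
echo "DNS1=$DOMAIN" > "$CPANEL_ROOT/users/bob"
echo "servername: $DOMAIN" > "$CPANEL_ROOT/userdata/bob/$DOMAIN"
# ----------------------------------------------

echo "== user files mentioning $DOMAIN =="
grep -l "$DOMAIN" "$CPANEL_ROOT"/users/* 2>/dev/null

echo "== userdata files mentioning $DOMAIN =="
grep -Rl "$DOMAIN" "$CPANEL_ROOT"/userdata/ 2>/dev/null
```

Every file it prints is a candidate for editing or removal in step 3.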


HOWTO: Format date output under FreeBSD

How do I format the date for display on screen, or for use in my shell scripts, on Linux or *BSD operating systems?

You need to use the standard date command to format dates and times. The same command works inside shell scripts.

Syntax

date +"%FORMAT"
date +"%FORMAT%FORMAT"
date +"%FORMAT-%FORMAT"
Open a terminal and type the following date command:
date -j +"%Y-%m-%d %H:%M:%S"
Sample output:
2014-08-18 16:57:59

A complete list of FORMAT control characters supported by the date command

FORMAT controls the output. It can be the combination of any one of the following:
%FORMAT string   Description
%%      a literal %
%a      locale's abbreviated weekday name (e.g., Sun)
%A      locale's full weekday name (e.g., Sunday)
%b      locale's abbreviated month name (e.g., Jan)
%B      locale's full month name (e.g., January)
%c      locale's date and time (e.g., Thu Mar 3 23:05:25 2005)
%C      century; like %Y, except omit last two digits (e.g., 21)
%d      day of month (e.g., 01)
%D      date; same as %m/%d/%y
%e      day of month, space padded; same as %_d
%F      full date; same as %Y-%m-%d
%g      last two digits of year of ISO week number (see %G)
%G      year of ISO week number (see %V); normally useful only with %V
%h      same as %b
%H      hour (00..23)
%I      hour (01..12)
%j      day of year (001..366)
%k      hour ( 0..23)
%l      hour ( 1..12)
%m      month (01..12)
%M      minute (00..59)
%n      a newline
%N      nanoseconds (000000000..999999999)
%p      locale's equivalent of either AM or PM; blank if not known
%P      like %p, but lower case
%r      locale's 12-hour clock time (e.g., 11:11:04 PM)
%R      24-hour hour and minute; same as %H:%M
%s      seconds since 1970-01-01 00:00:00 UTC
%S      second (00..60)
%t      a tab
%T      time; same as %H:%M:%S
%u      day of week (1..7); 1 is Monday
%U      week number of year, with Sunday as first day of week (00..53)
%V      ISO week number, with Monday as first day of week (01..53)
%w      day of week (0..6); 0 is Sunday
%W      week number of year, with Monday as first day of week (00..53)
%x      locale's date representation (e.g., 12/31/99)
%X      locale's time representation (e.g., 23:13:48)
%y      last two digits of year (00..99)
%Y      year
%z      +hhmm numeric time zone (e.g., -0400)
%:z     +hh:mm numeric time zone (e.g., -04:00)
%::z    +hh:mm:ss numeric time zone (e.g., -04:00:00)
%:::z   numeric time zone with : to necessary precision (e.g., -04, +05:30)
%Z      alphabetic time zone abbreviation (e.g., EDT)
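A common use of these format strings in shell scripts is building timestamped file names. A small sketch (the directory and archive names are illustrative):

```shell
# Build a timestamped archive name with date; %Y-%m-%d_%H%M%S keeps it sortable
STAMP=$(date +"%Y-%m-%d_%H%M%S")
mkdir -p demo-src
echo "example" > demo-src/file.txt
tar -czf "backup-$STAMP.tar.gz" demo-src
ls "backup-$STAMP.tar.gz"
```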



Timing HTTP Requests with cURL

Sometimes you just need to quickly benchmark how fast a page can be loaded (or fetched to be more precise). For these cases, cURL is a great option for timing HTTP requests.
$ curl -s -w "%{time_total}\n" -o /dev/null http://www.google.com/
0.095
Want a few more data points? Thanks to zsh, it's easy to loop around it:
$ for i in {1..3}; curl -s -w "%{time_total}\n" -o /dev/null http://www.google.com/
0.507
0.077
0.077
And if you’re a bash lover:
$ for i in {1..3};do curl -s -w "%{time_total}\n" -o /dev/null http://www.google.com/; done
1.079
0.124
0.106
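To reduce several runs to a single number, pipe the timings into awk. Here I feed in the three sample values from the bash loop above; in practice you would replace the printf with the curl loop itself:

```shell
# Average a column of timings with awk
printf '%s\n' 1.079 0.124 0.106 |
  awk '{ sum += $1; n++ } END { printf "mean: %.3f over %d runs\n", sum / n, n }'
# -> mean: 0.436 over 3 runs
```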
cURL's default is a GET request, but you can do POST, DELETE, PUT and more complex requests. If you're not familiar with cURL, the best place to start is the manpage.
Besides "time_total", curl also provides other timings, like "time_namelookup", "time_connect", etc. Checking a post by Joseph, I remembered that curl supports formatted output, so we can create a "template" for our HTTP timing test:
\n
     time_namelookup:  %{time_namelookup}\n
        time_connect:  %{time_connect}\n
     time_appconnect:  %{time_appconnect}\n
    time_pretransfer:  %{time_pretransfer}\n
       time_redirect:  %{time_redirect}\n
  time_starttransfer:  %{time_starttransfer}\n
                      ----------\n
          time_total:  %{time_total}\n
\n
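If you'd rather not type the template by hand, a heredoc will write it out in one go (the quoted EOF keeps the %{...} variables literal):

```shell
# Write the curl timing template to a file named curl-format
cat > curl-format <<'EOF'
\n
     time_namelookup:  %{time_namelookup}\n
        time_connect:  %{time_connect}\n
     time_appconnect:  %{time_appconnect}\n
    time_pretransfer:  %{time_pretransfer}\n
       time_redirect:  %{time_redirect}\n
  time_starttransfer:  %{time_starttransfer}\n
                      ----------\n
          time_total:  %{time_total}\n
\n
EOF
```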
Assuming the format file is named “curl-format”, we can execute a request:
$ curl -w "@curl-format" -o /dev/null -s http://www.google.com/
            time_namelookup:  0.416
               time_connect:  0.435
            time_appconnect:  0.000
           time_pretransfer:  0.435
              time_redirect:  0.000
         time_starttransfer:  0.488
                            ----------
                 time_total:  0.491
Where:
  • -w “@curl-format” tells cURL to use our format file
  • -o /dev/null redirects the output of the request to /dev/null
  • -s tells cURL not to show a progress bar
  • http://www.google.com/ is the URL we are requesting
In order, the timings are: DNS lookup, TCP connect, SSL/TLS handshake (time_appconnect; 0.000 here since this is plain HTTP), pre-transfer negotiations, redirects (none in this case), time to first byte (time_starttransfer), and total time to the last byte.
Looking for something a bit more “complete”? You can always try Apache Benchmark:


$ ab -n 3 http://www.google.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking www.google.com (be patient).....done
Server Software: gws
Server Hostname: www.google.com
Server Port: 80
Document Path: /
Document Length: 10928 bytes
Concurrency Level: 1
Time taken for tests: 0.231 seconds
Complete requests: 3
Failed requests: 2
(Connect: 0, Receive: 0, Length: 2, Exceptions: 0)
Write errors: 0
Total transferred: 35279 bytes
HTML transferred: 32984 bytes
Requests per second: 12.99 [#/sec] (mean)
Time per request: 76.999 [ms] (mean)
Time per request: 76.999 [ms] (mean, across all concurrent requests)
Transfer rate: 149.15 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 19 21 1.8 22 22
Processing: 50 56 5.3 59 61
Waiting: 46 51 4.0 53 53
Total: 73 77 5.0 79 82
Percentage of the requests served within a certain time (ms)
50% 76
66% 76
75% 82
80% 82
90% 82
95% 82
98% 82
99% 82
100% 82 (longest request)
