Yarn audit fix

Unlike npm, Yarn can’t automatically fix the problems it finds in a security audit. There is a workaround I found in a GitHub thread, though:

npm install                # generate package-lock.json from package.json
npm audit fix --force      # apply fixes, including breaking changes
rm yarn.lock               # discard the old lockfile
yarn import                # rebuild yarn.lock from package-lock.json
yarn audit                 # confirm the vulnerabilities are resolved
rm package-lock.json       # the npm lockfile is no longer needed

It’s not pretty but it does the job.

Fixing Twig deprecations in Symfony 4.4

I recently updated to Symfony 4.4 and had to work through a few deprecations. Some were straightforward, some were not. This Twig one was not:

The "twig.exception_controller" configuration key has been deprecated in Symfony 4.4, set it to "null" and use "framework.error_controller" configuration key instead.

This was resolved by adding the following to config/packages/twig.yaml:

twig:
    exception_controller: null
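
The deprecation message also mentions framework.error_controller. You only need to set that if you want a custom error controller; a sketch, with the controller name being purely illustrative:

# config/packages/framework.yaml
framework:
    error_controller: App\Controller\ErrorController::show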

Running Akamai Sandbox in Docker with HTTPS

Akamai’s new Sandbox can be run on local development environments, so you can test changes in development with production-like CDN settings. This allows you to identify issues more quickly before rolling out to production.

The Akamai Sandbox (or DevPoPs) is a Java app (see https://bit.ly/aka-sb-gh), which can be containerised for portability and ease of setup.

I created a simple docker compose setup (https://github.com/NoodlesNZ/devpops-test).

This can be used with a real certificate (signed by a CA), but it works just as well with a self-signed certificate generated using (on a Mac):

openssl req \
    -newkey rsa:2048 \
    -x509 \
    -nodes \
    -keyout server.key \
    -new \
    -out server.crt \
    -subj /CN=www.example.com \
    -reqexts SAN \
    -extensions SAN \
    -config <(cat /System/Library/OpenSSL/openssl.cnf \
        <(printf '[SAN]\nsubjectAltName=DNS:www.example.com')) \
    -sha256 \
    -days 3650
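
You can sanity-check that the SAN made it into the certificate with:

openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'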

The certificate and key are then referenced in the conf/config.json file:

{
  "connectorServerInfo": {
    "secure": true,
    "port": 443,
    "host": "0.0.0.0",
    "cert": {
      "certChainPath": "./conf/server.crt",
      "keyPath": "./conf/server.key"
    }
  },
  "originMappings": [
    {
      "from": "",
      "to": {
        "secure": true,
        "port": 8443,
        "host": "host.docker.internal"
      }
    }
  ],
  "jwt": ""
}

To explain a few of the options in the config.json file:

In the connectorServerInfo section:
- secure: true - enables HTTPS
- port: 443 - listens on port 443
- host: 0.0.0.0 - binds to all IP addresses (needed for Docker, as binding to 127.0.0.1 doesn't work)
- cert - the public/private key pair as generated with openssl above

In the originMappings section:
- from: - the origin hostname in your Akamai property, e.g. origin.example.com
- to - the local/development origin
- secure: true - enables HTTPS on the new origin
- port: 8443 - as the Sandbox is now listening on port 443, the origin needs to be on a different port
- host: host.docker.internal - a special Docker hostname on Mac that resolves to the host's IP address. This assumes your dev server is also hosted on your Mac.

This setup can also be incorporated into an existing docker compose setup, e.g.

version: '2'
services:
  web:
    image: example/web:latest
    networks:
      - appnet
  devpops:
    image: noodlesnz/devpops:latest
    volumes:
      - ./conf:/opt/devpops/conf
    ports:
      - 443:443
    networks:
      - appnet
networks:
  appnet:
    driver: "bridge"

With web and devpops sharing the same Docker network, you can use the host "web" in your config.json, e.g.

{
  "connectorServerInfo": {
    "secure": true,
    "port": 443,
    "host": "0.0.0.0",
    "cert": {
      "certChainPath": "./conf/server.crt",
      "keyPath": "./conf/server.key"
    }
  },
  "originMappings": [
    {
      "from": "",
      "to": {
        "secure": true,
        "port": 443,
        "host": "web"
      }
    }
  ],
  "jwt": ""
}

This also means that the development origin can only be accessed through the Akamai Sandbox, as the web container doesn't have any ports exposed to the host.
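
To quickly check that the sandbox is answering, you can point a request at it without editing /etc/hosts (the hostname is illustrative and should match your Akamai property):

curl -sk --resolve www.example.com:443:127.0.0.1 https://www.example.com/ -o /dev/null -w '%{http_code}\n'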

IPv6 workaround for UniFi USG on 2 Degrees UFB

I had an issue where our USG Pro was not getting IPv6 from 2 Degrees UFB after upgrading our Controller to 5.8. After a lot of messing around with this, I found a workaround originally posted here: https://community.ubnt.com/t5/UniFi-Routing-Switching/USG-DHCPv6-PD-bug-when-using-PPPoE/td-p/2487710.

Digging into the dhcpv6-pd logs and dumping out the config, I saw that I had two dhcpv6-pd blocks: one under interface eth2 vif 10 and the other under interface eth2 vif 10 pppoe 2 (where it should be).

It was possible to temporarily fix this issue by removing the first block.
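
On the USG's CLI that looks roughly like this (interface paths as described above):

configure
delete interfaces ethernet eth2 vif 10 dhcpv6-pd
commit
save
exit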

This allowed our USG to get IPv6 from our ISP, and all the clients on the network then got IPv6 as well.

To make this more permanent, I had to add a script on the USG under /config/scripts/post-config.d/dhcp.sh.
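
A sketch of what the script needs to do, using the stock EdgeOS config wrapper (the task name fix-dhcpv6 is my own label, matching the scheduled task below):

#!/bin/bash
# Remove the stray dhcpv6-pd block after the Controller re-provisions the USG,
# then delete the scheduled task so this only runs once.
CFG=/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper
$CFG begin
$CFG delete interfaces ethernet eth2 vif 10 dhcpv6-pd
$CFG delete system task-scheduler task fix-dhcpv6
$CFG commit
$CFG save
$CFG end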

To run this script, I added a scheduled task on the Controller in /usr/lib/unifi/data/sites/default/config.gateway.json.
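
A sketch of that entry; the 2m interval matches the delay described below, and again the task name is my own label:

{
  "system": {
    "task-scheduler": {
      "task": {
        "fix-dhcpv6": {
          "executable": {
            "path": "/config/scripts/post-config.d/dhcp.sh"
          },
          "interval": "2m"
        }
      }
    }
  }
}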

This runs the dhcp.sh script 2 minutes after provisioning and then the script removes the scheduled task (as it only needs to run once).

Speed up Jenkins phpcs (PHP CodeSniffer)

PHP CodeSniffer was always one of the slowest tasks in our Jenkins CI, as it ran across our whole code base. LB Denker from Etsy wrote a good piece of software called CSRunner, which looked to solve this problem by only running phpcs on files that had changed in the last 7 days (or so). It is written as a PHP script run from Jenkins.

I took this idea and adapted it to run in Ant. Instead of looking at files changed in the last x days, it looks at the checkstyle report from the last run and gets a list of files with problems. It merges this with any files that have changed since the last build. In theory this should bring the run time down (assuming you have a low number of files with problems).
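
The gist of it, sketched in shell rather than Ant (the report/file names and the LAST_BUILD_SHA variable are illustrative):

# files flagged in the previous checkstyle report
grep -oP '(?<=<file name=")[^"]+' checkstyle-result.xml | sort -u > flagged.txt
# files changed since the last build
git diff --name-only "$LAST_BUILD_SHA" -- '*.php' | sort -u > changed.txt
# run phpcs only on the union of the two
sort -u flagged.txt changed.txt | xargs -r phpcs --report=checkstyle --report-file=checkstyle-result.xml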

I’m open to any ideas on how to improve this as I’m not that experienced with Ant.

Show MySQL engine tablespace size

I have been trying to migrate everything in MySQL to InnoDB (death to all MyISAM), but was unsure how much data was being stored in each storage engine. You can use the following query to get the total usage for each engine:

SELECT ENGINE, CONCAT(FORMAT(RIBPS/POWER(1024,pw),2),SUBSTR(' KMGT',pw+1,1)) `Usage` FROM
(
    SELECT ENGINE, RIBPS, FLOOR(LOG(RIBPS)/LOG(1024)) pw
    FROM
    (
        SELECT ENGINE, SUM(data_length+index_length) RIBPS
        FROM information_schema.tables AAA
        GROUP BY ENGINE
        HAVING RIBPS != 0
    ) AA
) A;

Now that I have that information, I can adjust my InnoDB buffers and reduce the MyISAM caches.

Speeding up Percona XtraBackup restores

I started playing around with using xtrabackup (or more specifically innobackupex) to back up MySQL. Most of our tables are now InnoDB, so it didn't make sense to keep dumping everything out via mysqldump.

I had a clone of our master DB server in our virtual environment that I was trying to restore the backup onto, but it was taking hours (using innobackupex --copy-back /backup/). I figured that the IO on my virtual servers was just crap and I'd have to grin and bear it. There doesn't seem to be much around about restoring using innobackupex, and even the command options are limited for restores, so I thought --copy-back was the only way.

It seems that if your backup is on the same filesystem as where it's going to end up, then it's a lot faster to use the --move-back option. This changed my restore time from hours to seconds.

e.g.
innobackupex --move-back /backup/
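
For completeness, the backup still needs to be prepared before either restore option, and ownership fixed afterwards; the usual sequence is something like this (datadir assumed to be /var/lib/mysql):

innobackupex --apply-log /backup/
innobackupex --move-back /backup/
chown -R mysql:mysql /var/lib/mysql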

Validator.nu RPM

I've been playing around with validator.nu for the last few days. I have been trying to get a standalone version working so I could package it up and puppetize it. Unfortunately, a lot of the standalone jar builds failed (Java hell).

I finally found that it’s been released here: https://github.com/validator/validator.github.io

I whipped up a basic RPM to use this and install an init script etc.: https://github.com/NoodlesNZ/validator-nu-rpm

IPv6 in NZ

I just did a quick survey of the top 500 sites in NZ (based on Alexa data), and I was disappointed to see that only two NZ-based sites (excluding Google, Microsoft, Facebook etc.) supported IPv6: geekzone.co.nz and nzsale.co.nz (Geekzone implemented its IPv6 via Cloudflare and NZ Sale through Akamai).
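
The check itself is just a lookup for AAAA records, e.g.:

dig +short AAAA geekzone.co.nz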

Come on people, it's 2014. There's no excuse not to support IPv6, especially with two RIRs on their last /8 and APNIC with ~13.5 million addresses remaining. What's really worrying is that some of the major ISPs (Telecom, Vodafone, Orcon) don't even have IPv6 on their public-facing websites. I'd guess that their residential customers won't be seeing IPv6 on their connections anytime soon and that CGN is a real possibility.

GPT Passback tags and validation errors

Google released new passback tags for GPT (DFP) a while back. While these tags appear to work, they don't comply with W3C standards, e.g.:

<script src="//www.googletagservices.com/tag/js/gpt.js">
googletag.pubads().definePassback('/7146/adunit', [300, 250]).display();
</script>

It seems you can have either src or text between the tags, not both. It generates this error:

line 533 column 9 - Error: The text content of element script was not in the required format: Expected space, tab, newline, or slash but found g instead.

I’m unsure of a solution at the moment. I have raised this issue with our account manager, but I don’t expect any fixes anytime soon.
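
For reference, the W3C-valid shape would be to split the load and the call into separate script elements; I haven't verified that the passback still behaves correctly when written this way:

<script src="//www.googletagservices.com/tag/js/gpt.js"></script>
<script>
googletag.pubads().definePassback('/7146/adunit', [300, 250]).display();
</script>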

Another day, another vBulletin code error

It seems that vBulletin doesn’t test on PHP 5.4 or 5.5 these days. Either that or they’re happy to just suppress errors rather than actually fix them.

I upgraded my forum today to vBulletin 4.2.2 and noticed these errors on a search page:

Warning: Declaration of vBForum_Item_SocialGroupMessage::getLoadQuery() should be compatible with vB_Model::getLoadQuery($required_query = '', $force_rebuild = false) in …./packages/vbforum/item/socialgroupmessage.php on line 261

Warning: Declaration of vBForum_Item_SocialGroupDiscussion::getLoadQuery() should be compatible with vB_Model::getLoadQuery($required_query = '', $force_rebuild = false) in …./packages/vbforum/item/socialgroupdiscussion.php on line 337

Luckily, a user on the vbulletin.com support forum has a fix: http://www.vbulletin.com/forum/forum/vbulletin-4/vbulletin-4-questions-problems-and-troubleshooting/4000233-warning-declaration-of-vbforum_item_socialgroupmessage?p=4000793#post4000793

What annoys me is that vBulletin released this version a while ago, but are still distributing it with this code error.

Xalan segfault

When using xalan-c 1.10 and the supporting package xerces-c (3.0.1) from EPEL, Xalan would segfault when transforming XML with XSLT, e.g.

[root@box generate]# Xalan test.xml test.xsl
Segmentation fault (core dumped)

/var/log/messages didn’t have any helpful information:

Sep 30 17:52:01 box kernel: Xalan[25236]: segfault at 18 ip 00007f5b44758cb9 sp 00007fffa8ff33d0 error 4 in libxalan-c.so.110.0[7f5b444d3000+3e2000]

There seems to be a bug open for this at EPEL (Bug 807816 – Xalan-c segfaults on any input), but it has not been acknowledged or worked on.

I traced the problem to an incompatibility between xalan-c 1.10 and xerces-c 3.x. There is a patch as part of the EPEL xalan-c rpm which is meant to allow for this, but it seems broken, as the source rpm didn't compile for me.

An easy fix here is to upgrade both xalan-c and xerces-c to the latest version. I hacked together rpms for these based on the work already done in EPEL:

xalan-c-1.11.0-1.el6.src.rpm
xerces-c-3.1.1-1.el6.src.rpm

After initial testing, it seems that this fixes the problem and XML can now be transformed in Xalan with XSLT.

OpenSSL 1.0.1 for RHEL/CentOS 6.x

This is a great page on how to build OpenSSL 1.0.1 for RHEL/CentOS 6.x:

https://www.ptudor.net/linux/openssl/

This has been ported from the work done on Fedora OpenSSL (https://admin.fedoraproject.org/pkgdb/acls/name/openssl and http://pkgs.fedoraproject.org/cgit/openssl.git/). FIPS has been removed for ease of compiling.

I have been running this in our dev environment for a few days and it seems solid. OpenSSL 1.0.1 adds TLS 1.1/1.2 (changelog here: http://www.openssl.org/news/changelog.html).
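
You can confirm the new protocol support against a server running the rebuilt OpenSSL with something like (hostname is illustrative):

openssl s_client -connect dev.example.com:443 -tls1_2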

Edgecast Transact

So Edgecast has announced a new product today, a separate CDN just for e-commerce sites.

As Telecom Ramblings puts it:

The new network is based on their existing CDN technology, but built on an entirely separate network infrastructure tuned specifically for the site acceleration and transaction needs of online retail sites. In other words, it’s aimed at enterprises tired of sharing a least-common-denominator fast lane with everything from cute cat videos to gaming updates to whatever it is kids listen to these days.

Am I the only one who reads this as a lack of confidence in their core CDN product, or are they trying to differentiate themselves from other CDNs? To me, a CDN should be able to handle any traffic you throw at it, and if you are seeing slowdowns then it's time to find a new CDN.

I would rather my CDN put more time and money into their core product than branching off and building a completely separate network. What’s next, a sports CDN? News CDN? Porn CDN?

Chorus Cabinet locations

Here is the list of the existing Chorus (previously Telecom NZ) cabinets: http://www.chorus.co.nz/file/3194/existing_distribution_cabinet_list_may_2012.xlsx

The coordinates of the cabinets are in NZ Map Grid format and can be converted here: http://apps.linz.govt.nz/coordinate-conversion/index.aspx?IS=NZMG&OS=WGS84&IO=NE&IC=H&IH=-&OO=NE&OC=H&OH=-&PN=N&IF=T&ID=+&OF=H&OD=+&CI=Y&do_entry=Enter+coordinates&DEBUG=&ADVANCED=0 (with X being the Easting and Y being the Northing)

Kraken Crawler

Recently I have seen "kraken-crawler/0.2.0" hitting my site. This is a bot used by Kontera (an advertising company) to "better understand and analyze your site's content" (according to their support staff).

Apparently the crawler adheres to robots.txt, so you can block it by adding (robots.txt matches on the robot's name token, so the version suffix isn't needed):

User-agent: kraken-crawler
Disallow: /

The URL to their crawler info is broken, so it's hard to get an idea of what this is used for. If you are also seeing this bot, hopefully this helps you.

s3fs/fuse on Centos/RHEL

s3fs requires fuse 2.8.4, but on RHEL the latest version is 2.8.3, so fuse needs to be installed from source:

yum remove fuse fuse* fuse-devel
yum install gcc libstdc++-devel gcc-c++ curl curl* curl-devel libxml2 libxml2* libxml2-devel openssl-devel mailcap

wget "https://downloads.sourceforge.net/project/fuse/fuse-2.X/2.8.4/fuse-2.8.4.tar.gz?r=&ts=1299709935&use_mirror=cdnetworks-us-1"
tar -xzf fuse-2.8.4.tar.gz
cd fuse-2.8.4/
./configure --prefix=/usr
make
make install
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig/
ldconfig
modprobe fuse
pkg-config --modversion fuse
cd ../
wget http://s3fs.googlecode.com/files/s3fs-1.63.tar.gz
tar -xzf s3fs-1.63.tar.gz
cd s3fs-1.63
./configure --prefix=/usr
make
make install
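
Once built, a bucket can be mounted along these lines (the bucket name, mount point, and the credentials in /etc/passwd-s3fs are illustrative):

echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
mkdir -p /mnt/s3
s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs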

If when reinstalling s3fs you get this error:

No package 'fuse' found

You need to re-run this before compiling s3fs:

export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig/
ldconfig
modprobe fuse
pkg-config --modversion fuse

IE10 for Windows 7 (a tale of SPOF)

Microsoft have finally released IE10 for Windows 7. It seems their download page (http://windows.microsoft.com/en-us/internet-explorer/downloads/ie-10/worldwide-languages) is getting pretty hammered. Looking at the requests on the page, it seems that everything is held up by a request to ajax.microsoft.com. The page loads the template header, but no more. Surely in this day and age a company like Microsoft would load their scripts asynchronously and prevent a single script from taking down the page.

Update: it seems this is a problem with the latest build of Firefox’s Aurora. Twitter is experiencing a similar problem with one of their scripts, so there may be a problem with Firefox’s script engine.

AWS OpsWorks

Amazon have released their new application management tool, OpsWorks. This uses Chef to deploy and maintain instances on AWS. While it looks neat and I'm sure it will work for startups, it's not something I could trust. I still like to get my hands dirty with server deployment, and I try to use bare metal rather than virtual instances where possible. Also, from what I'm reading, this tool is still very much a "beta" and is quite buggy.

The tool itself is not revolutionary; there are many other systems out there that do a similar thing. What is interesting, though, is that Amazon is offering this, once again improving the tools available without the need for a 3rd party. Will this kill off competition or prompt the current providers to lift their game?

OpsWorks has brought up an interesting question. Now that AWS is using Chef and they have thousands of developers/sites using them, will Chef become the de facto standard and will other configuration management systems die out? There is a rumour that Amazon might offer Puppet support alongside Chef, but that's just a rumour for now.

Personally I think Chef will increase in popularity due to OpsWorks, but I don't think Puppet et al. will die away. Each system has its own merits, and devs/ops will use whatever suits them and their environment.

rpmrebuild ftw!

There's always been a problem with the Oracle-provided MySQL RPMs and the older CentOS/RHEL MySQL RPMs: the former provide "MySQL" and the latter provide "mysql", so the many packages in CentOS/RHEL that require "mysql" run into dependency conflicts.

A quick way to fix this is to use rpmrebuild -e -p and change the "Requires" from "mysql" to "MySQL". Hopefully in the future CentOS/RHEL will standardize on the Oracle naming convention, or Oracle's packages will be made "backwardly" compatible.
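
For example (the package name is illustrative):

rpmrebuild -e -p php-mysql-5.3.3-40.el6.x86_64.rpm
# in the spec that opens, change:
#   Requires: mysql
# to:
#   Requires: MySQL
# then save and exit, and rpmrebuild produces the modified rpm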

New server with SSDs

We just provisioned a new server with Sandy Bridge and 4 SSDs in a RAID 5 configuration. The server it replaces was seriously underpowered, so this is a timely replacement. I ran hdparm on both servers to compare:

Old Server:
dag:/home# hdparm -Tt /dev/sda6

/dev/sda6:
Timing cached reads: 6678 MB in 2.00 seconds = 3341.64 MB/sec
Timing buffered disk reads: 186 MB in 3.03 seconds = 61.38 MB/sec

New Server:
newserver:/home# hdparm -Tt /dev/sda6

/dev/sda6:
Timing cached reads: 25048 MB in 2.00 seconds = 12539.88 MB/sec
Timing buffered disk reads: 1956 MB in 3.00 seconds = 651.75 MB/sec

I’ll be rolling out more of these when other servers are up for replacement.

vBulletin 4.2.x and PHP 5.4

It seems that the latest versions of vBulletin are very broken on PHP 5.4, even though they state that "vBulletin 4.x requires PHP 5.2.0 or greater and MySQL 4.1.0 or greater".

Most of the problems come from E_STRICT, which is part of E_ALL in PHP 5.4, but vBulletin and Internet Brands (who own vBulletin) seem very slow to fix them. They even denied it was a problem with vBulletin when I originally reported some of the errors in June 2012, stating "Closing this issue because it appears to be unrelated to vBulletin code."

They have since reopened the issue and rolled it up into a PHP 5.4 check task, but progress seems quite slow given that PHP 5.4 was released nearly a year ago and PHP 5.5 is due out soon.

So to get vBulletin working without errors on my sites, I have to modify and fix all of these problems myself. I wish I could contribute the fixes back to vBulletin or its users so that this effort is not duplicated, but there doesn't seem to be a way to do it (hosting the files on here would violate copyright).

InnoDB recovery

I recently had a database server fail during a large DELETE query, which caused some problems with InnoDB's ibdata1: the index of the data file differed from what MySQL expected. As this wasn't one of our main servers, I hadn't tuned InnoDB, and all the InnoDB data was in the single ibdata1 file. The only way for me to start MySQL was to add this to my.cnf:

innodb_force_recovery = 4

This forced MySQL to ignore all InnoDB errors, and I used mysqldump to extract all the data from the InnoDB tables. The InnoDB tables were found using the following query:

SELECT table_schema, table_name
FROM INFORMATION_SCHEMA.TABLES
WHERE engine = 'innodb';
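
The dump itself was per table, along these lines (the schema/table names come from the query above; the output file name is illustrative):

mysqldump --skip-lock-tables mydb mytable > mydb.mytable.sql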

I then stopped the MySQL server, removed innodb_force_recovery, deleted the ibdata1 file, and tuned InnoDB. I also made sure I added this to my.cnf:

innodb_file_per_table     = 1
innodb_log_files_in_group = 2

All tables were loaded from the mysqldump backup files and everything is happy again.