PHP CodeSniffer was always one of the slowest tasks in our Jenkins CI as it ran across our whole code base. LB Denker from Etsy wrote a good piece of software called CSRunner which aims to solve this problem by running phpcs only on files that have changed in the last 7 days (or so). It is written as a PHP script that is run from Jenkins.
I took this idea and adapted it to run in Ant. Instead of looking at files changed in the last x days, it looks at the checkstyle report from the last run and gets a list of files with problems. It merges this with any files that have changed since the last build. In theory this should bring the run time down (assuming you have a low number of files with problems).
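The merge step can be sketched roughly like this (in Python purely as illustration; the real implementation is an Ant task, and the function names here are my own):

```python
import xml.etree.ElementTree as ET

def files_with_problems(checkstyle_report):
    """File names that had at least one error in the last checkstyle report."""
    root = ET.fromstring(checkstyle_report)
    # checkstyle reports are <checkstyle><file name="..."><error .../></file>...
    return {f.get("name") for f in root.iter("file") if len(f) > 0}

def files_to_check(checkstyle_report, changed_files):
    """Union of previously failing files and files changed since the last build."""
    return sorted(files_with_problems(checkstyle_report) | set(changed_files))
```

Feeding only this merged list to phpcs is what keeps the run short when most files are clean.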
I’m open to any ideas on how to improve this as I’m not that experienced with Ant.
I have been trying to migrate everything in MySQL to use InnoDB (death to all MyISAM), but was unsure of how much data was being stored in each storage engine. You can use the following query to give a total usage for all engines:
SELECT ENGINE,
       CONCAT(FORMAT(RIBPS/POWER(1024,pw),2), SUBSTR(' KMGT',pw+1,1)) `Usage`
FROM (
    SELECT ENGINE, RIBPS, FLOOR(LOG(RIBPS)/LOG(1024)) pw
    FROM (
        SELECT ENGINE, SUM(data_length+index_length) RIBPS
        FROM information_schema.tables AAA
        GROUP BY ENGINE
        HAVING RIBPS != 0
    ) A1
) A2;
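The unit-suffix arithmetic in that query is just "pick a power of 1024, then pick a letter from ' KMGT'". A quick Python equivalent, only to illustrate the logic:

```python
import math

def human_size(num_bytes):
    """Same idea as the SQL: pw = floor(log base 1024), suffix from ' KMGT'."""
    pw = math.floor(math.log(num_bytes) / math.log(1024))
    # FORMAT(x, 2) in MySQL rounds to 2 decimals with thousands separators
    return f"{num_bytes / 1024 ** pw:,.2f}{' KMGT'[pw]}"
```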
Now that I have that information, I can adjust my InnoDB buffers and reduce the MyISAM caches.
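For example, something along these lines in my.cnf (the option names are standard MySQL settings, but the values here are purely illustrative; size them for your own hardware and workload):

```ini
# Give InnoDB the bulk of the memory, shrink the now mostly idle MyISAM key cache
innodb_buffer_pool_size = 4G
key_buffer_size = 64M
```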
All the cool kids are doing it, so I’m playing around with enabling SSL by default with HSTS. Thanks to CloudFlare and StartSSL it’s been mostly without a hiccup.
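For anyone curious, HSTS itself is just a response header. CloudFlare can add it for you, or you can set it yourself, e.g. in nginx (the max-age below is illustrative):

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
```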
I’ve been playing around with validator.nu the last few days. I have been trying to get a standalone version working so I could package it up and puppetize it. Unfortunately a lot of the standalone jar builders failed (java hell).
I finally found that it’s been released here: https://github.com/validator/validator.github.io
I whipped up a basic rpm to use this and install an init script etc: https://github.com/NoodlesNZ/validator-nu-rpm
I just did a quick survey of the top 500 sites in NZ (based on Alexa data) and I was disappointed to see that only two NZ-based sites (excluding Google, Microsoft, Facebook etc.) supported IPv6: geekzone.co.nz and nzsale.co.nz (Geekzone implemented its IPv6 via CloudFlare and NZ Sale through Akamai).
Come on people, it’s 2014. There’s no excuse not to support IPv6, especially with two RIRs on their last /8 and APNIC with ~13.5 million addresses remaining. What’s really worrying is that some of the major ISPs (Telecom, Vodafone, Orcon) don’t even have IPv6 on their public-facing websites. I’d guess that their residential customers won’t be seeing IPv6 on their connections anytime soon and that CGN is a real possibility.
Google released new passback tags for GPT (DFP) a while back. While these tags appear to work, they don’t comply with W3C standards, e.g.:
googletag.pubads().definePassback('/7146/adunit', [300, 250]).display();
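The generated passback tag puts that call inline inside a script element that also carries a src attribute, roughly like this (markup paraphrased, not Google’s exact output):

```html
<script src="//www.googletagservices.com/tag/js/gpt.js">
  googletag.pubads().definePassback('/7146/adunit', [300, 250]).display();
</script>
```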
It seems you can have either src or text between the tags, not both. It generates this error:
line 533 column 9 – Error: The text content of element script was not in the required format: Expected space, tab, newline, or slash but found g instead.
I’m unsure of a solution at the moment. I have raised this issue with our account manager, but I don’t expect any fixes anytime soon.
It seems that vBulletin doesn’t test on PHP 5.4 or 5.5 these days. Either that or they’re happy to just suppress errors rather than actually fix them.
I upgraded my forum today to vBulletin 4.2.2 and noticed these errors on a search page:
Warning: Declaration of vBForum_Item_SocialGroupMessage::getLoadQuery() should be compatible with vB_Model::getLoadQuery($required_query = '', $force_rebuild = false) in …./packages/vbforum/item/socialgroupmessage.php on line 261
Warning: Declaration of vBForum_Item_SocialGroupDiscussion::getLoadQuery() should be compatible with vB_Model::getLoadQuery($required_query = '', $force_rebuild = false) in …./packages/vbforum/item/socialgroupdiscussion.php on line 337
Luckily a user on the vbulletin.com support forum has a fix (essentially making the child class method signatures match the parent’s): http://www.vbulletin.com/forum/forum/vbulletin-4/vbulletin-4-questions-problems-and-troubleshooting/4000233-warning-declaration-of-vbforum_item_socialgroupmessage?p=4000793#post4000793
What annoys me is that vBulletin released this version a while ago, but are still distributing it with this code error.
When using xalan-c 1.10 and the supporting package xerces-c (3.0.1) from EPEL, Xalan would segfault when transforming XML with XSLT, e.g.:
[root@box generate]# Xalan test.xml test.xsl
Segmentation fault (core dumped)
/var/log/messages didn’t have any helpful information:
Sep 30 17:52:01 box kernel: Xalan: segfault at 18 ip 00007f5b44758cb9 sp 00007fffa8ff33d0 error 4 in libxalan-c.so.110.0[7f5b444d3000+3e2000]
There seems to be a bug open for this at EPEL (Bug 807816 – Xalan-c segfaults on any input), but it has not been acknowledged or worked on.
I traced the problem to an incompatibility between xalan-c 1.10 and xerces-c 3.x. There is a patch as part of the EPEL xalan-c rpm which is meant to allow for this, but it seems broken as the source rpm didn’t compile for me.
An easy fix here is to upgrade both xalan-c and xerces-c to the latest version. I hacked together rpms for these based on the work already done in EPEL:
After initial testing it seems that this fixes the problem and XML can now be transformed with Xalan using XSLT.
This is a great page on how to build OpenSSL 1.0.1 for RHEL/CentOS 6.x:
This has been ported from the work done on Fedora OpenSSL (https://admin.fedoraproject.org/pkgdb/acls/name/openssl and http://pkgs.fedoraproject.org/cgit/openssl.git/). FIPS has been removed for ease of compiling.
I have been running this on our dev environment for a few days and it seems stable. OpenSSL 1.0.1 adds TLS 1.1/1.2 (changelog here: http://www.openssl.org/news/changelog.html).
So Edgecast has announced a new product today, a separate CDN just for e-commerce sites.
As Telecom Ramblings puts it:
The new network is based on their existing CDN technology, but built on an entirely separate network infrastructure tuned specifically for the site acceleration and transaction needs of online retail sites. In other words, it’s aimed at enterprises tired of sharing a least-common-denominator fast lane with everything from cute cat videos to gaming updates to whatever it is kids listen to these days.
Am I the only one who reads this as a lack of confidence in their core CDN product, or are they trying to differentiate themselves from other CDNs? To me a CDN should be able to handle any traffic that you throw at it, and if you are getting slowdowns then it’s time to find a new CDN.
I would rather my CDN put more time and money into their core product than branching off and building a completely separate network. What’s next, a sports CDN? News CDN? Porn CDN?
If anyone is looking for a tutor for their child in Auckland, check out 121 tutors. They do an awesome job of matching the right tutor to your child’s needs and learning style.
Recently I have seen “kraken-crawler/0.2.0” hitting my site. This is a bot used by Kontera (advertising company) to “better understand and analyze your site’s content” (according to their support staff).
Apparently the crawler adheres to robots.txt so you can block it by adding:
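Something like the following in robots.txt (assuming the crawler matches on the “kraken-crawler” token from its user-agent string):

```text
User-agent: kraken-crawler
Disallow: /
```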
The URL to their crawler info is broken so it’s hard to get an idea of what this is used for. If you are also seeing this bot, hopefully this helps you.
s3fs requires fuse 2.8.4, but on RHEL the latest version is 2.8.3, so fuse needs to be installed from source code.
yum remove fuse fuse* fuse-devel
yum install gcc libstdc++-devel gcc-c++ curl curl* curl-devel libxml2 libxml2* libxml2-devel openssl-devel mailcap
tar -xzf fuse-2.8.4.tar.gz
cd fuse-2.8.4 && ./configure && make && make install
pkg-config --modversion fuse
tar -xzf s3fs-1.63.tar.gz
cd s3fs-1.63 && ./configure && make && make install
If you get this error when reinstalling s3fs:
No package ‘fuse’ found
You need to point pkg-config at fuse’s .pc file before compiling s3fs, e.g.:
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/usr/lib64/pkgconfig/
pkg-config --modversion fuse
Microsoft have finally released IE10 for Windows 7. It seems their download page (http://windows.microsoft.com/en-us/internet-explorer/downloads/ie-10/worldwide-languages) is getting pretty hammered. Looking at the requests on the page, it seems that everything is held up by a request to ajax.microsoft.com. The page loads the template header, but no more. Surely in this day and age a company like Microsoft would load their scripts asynchronously and prevent a single script from taking down the page.
Update: it seems this is a problem with the latest build of Firefox’s Aurora. Twitter is experiencing a similar problem with one of their scripts, so there may be a problem with Firefox’s script engine.
Amazon have released their new application management tool, OpsWorks. This uses Chef to deploy and maintain instances on AWS. While it looks neat and I’m sure it will work for startups, it’s not something I could trust. I still like to get my hands dirty with server deployment and I try to use bare metal rather than virtual instances where possible. Also, from what I’m reading this tool is still very much a “beta” and is quite buggy.
The tool itself is not revolutionary; there are many other systems out there that do a similar thing. What is interesting, though, is that Amazon is offering this, once again improving the tools available without the need to use a 3rd party. Will this kill off competition or prompt the current providers to lift their game?
OpsWorks has brought up an interesting question. Now that AWS is using Chef and they have thousands of developers/sites using them, will Chef become the de facto standard, and will other configuration management systems die out? There is a rumour that Amazon might offer Puppet support alongside Chef, but that’s just a rumour for now.
Personally I think Chef will increase in popularity due to OpsWorks, but I don’t think Puppet et al will die away. Each system has its own merits and devs/ops will use whatever suits them and their environment.
There’s always been a problem with Oracle-provided MySQL rpms and the older CentOS/RHEL MySQL rpms. The former provides “MySQL” and the latter provides “mysql”, so a lot of the packages in CentOS/RHEL require “mysql”, which creates some conflicts.
A quick way to fix this is to use rpmrebuild -e -p and change the “requires” from “mysql” to “MySQL”. Hopefully in the future CentOS/RHEL will standardize on the Oracle naming convention, or the Oracle packages will be made backward compatible.
We just provisioned a new server with Sandy Bridge and 4 SSDs in a RAID 5 configuration. The server it was replacing was seriously underpowered, so this is a timely replacement. I ran hdparm on both servers to compare:
dag:/home# hdparm -Tt /dev/sda6
Timing cached reads: 6678 MB in 2.00 seconds = 3341.64 MB/sec
Timing buffered disk reads: 186 MB in 3.03 seconds = 61.38 MB/sec
# hdparm -Tt /dev/sda6
Timing cached reads: 25048 MB in 2.00 seconds = 12539.88 MB/sec
Timing buffered disk reads: 1956 MB in 3.00 seconds = 651.75 MB/sec
I’ll be rolling out more of these when other servers are up for replacement.
It seems that the latest versions of vBulletin are very broken on PHP 5.4, even though they state that “vBulletin 4.x requires PHP 5.2.0 or greater and MySQL 4.1.0 or greater”.
Most of the problems are from E_STRICT which is part of E_ALL in PHP 5.4, but vBulletin and Internet Brands (who own vBulletin) seem very slow to fix these problems. They even denied that it was a problem with vBulletin when I originally reported some of the errors in June 2012 stating “Closing this issue because it appears to be unrelated to vBulletin code.”
They have since reopened the issue and it has been rolled up into a PHP 5.4 check task, but progress seems quite slow given that PHP 5.4 was released nearly a year ago and PHP 5.5 is due out soon.
So to get vBulletin working without errors on my sites I have to modify and fix all of these problems. I wish I could contribute back to vBulletin or to its users so that this effort is not duplicated, but there doesn’t seem to be a way to do it (hosting files on here would violate copyright).
I recently had a database server fail during a large DELETE query, which caused some problems with InnoDB’s ibdata1. The index of this data file was different from what MySQL expected. As this wasn’t one of our main servers I hadn’t tuned InnoDB, and all the InnoDB data was in the single ibdata1 file. The only way for me to start MySQL was to add this to my.cnf:
innodb_force_recovery = 4
This forced MySQL to start despite the InnoDB errors, and I used mysqldump to extract all the data from the InnoDB tables. The InnoDB tables were found using the following query:
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'InnoDB';
I stopped the MySQL server again, removed the innodb_force_recovery line, deleted the ibdata1 file and tuned InnoDB. I also made sure I added this to my.cnf:
innodb_file_per_table = 1
innodb_log_files_in_group = 2
All tables were loaded from the mysqldump backup files and everything is all happy again.
When trying to build an rpm for apr-util on my CentOS 6.2 box I got a nasty error when the rpm build was running its test suite:
testmemcache : |/bin/sh: line 2: 14322 Segmentation fault LD_LIBRARY_PATH="`echo "../crypto/.libs:../dbm/.libs:../dbd/.libs:../ldap/.libs:$LD_LIBRARY_PATH" | sed -e 's/::*$//'`" ./$prog
Programs failed: testall
make: *** [check] Error 139
+ exit 1
error: Bad exit status from /var/tmp/rpm-tmp.OQddG8 (%check)
This relates to this bug: https://issues.apache.org/bugzilla/show_bug.cgi?id=52705
Thanks to Peter Poeml for releasing a patch for this, which I’ve put into an updated apr-util.spec
I have been building a lot of custom RPMs lately and I found this great resource which lists all of the macros that can be used in the spec files and what they equate to.
I recently changed to using the unix command line for cvs and changed all my cvs roots to :ext: instead of :ssh: (TortoiseCVS prefers :ssh:).
When I made the change, anytime I updated cvs I got this error:
No such file or directory
This makes no sense. Luckily, after searching around I found this is a problem with DOS line breaks screwing with unix cvs. Running the following fixes the problem:
dos2unix `find . -name Root`
dos2unix `find . -name Entries`
dos2unix `find . -name Repository`
It’s interesting that not only does Network Solutions not list any information whatsoever in their knowledge base about DNSSEC, their support staff have no idea what it is either.
Come on Network Solutions, sort it out, otherwise I may have to move my domains to a registrar that does support DNSSEC.
I had a problem where df and du disagreed with the amount of disk usage. The cause was processes holding on to unlinked files. Running the following identified the processes:
ls -ld /proc/*/fd/* 2>&1 | fgrep '(deleted)'
I killed the processes and df is now showing the correct information.