Blog

  • Tweets You’ll Most Likely Read at SxSW 2010

    Here’s a list of tweets you’ll most likely read at SxSW 2010:

    • OMFG, landed! (Ya, you and 12,000 other people.)
    • OMFG, I’m @crowdedplace hanging out with my new BFFs @socialmediadouche1 and @socialmediadouche2.
    • I’m sick of folks tweeting who they’ll be hanging out with. I don’t care. Turning the twitter fire hose off.
    • OMFG, I’m just telling people who I’m hanging out with because my boss needs to know.
    • My hotel room is #69. Just bring yourself.
    • OMFG, That DM you sent?!!! That actually went public.
    • Big line in front of @crowdedplace.
    • No line @emptyplace.

    I think SxSW Interactive is an awesome event. It's one of the few tech conferences where, if you make a friend there, they stay your IRL friend for a long time.

    It's also a great place for making complex deals in a really easy way. Think of everything it takes to launch a major Internet app with mobile, web, and video components. You can get all of those stakeholders in one place at SxSW and hammer out a huge deal with two days of face time.

  • Install Script For Rails on Debian

    The following works great on Rackspace's Debian virtual servers, and within about 5 minutes you'll have a running Rails instance.

    #!/bin/bash

    apt-get update -y
    apt-get upgrade -y
    apt-get install dlocate -y
    apt-get install build-essential libssl-dev libreadline5-dev zlib1g-dev -y
    apt-get install sqlite3 -y
    cd /usr/local/src
    wget ftp://ftp.ruby-lang.org/pub/ruby/stable-snapshot.tar.gz
    tar zxvf stable-snapshot.tar.gz
    cd ruby
    ./configure && make && make install
    ruby -v
    ruby -ropenssl -rzlib -rreadline -e "puts :Hello"
    cd /usr/local/src
    wget http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz
    tar zxvf rubygems-1.3.5.tgz
    cd rubygems-1.3.5
    ruby setup.rb
    gem install rails
    apt-get install mysql-server mysql-client -y
    apt-get install libmysql-ruby libmysqlclient15-dev -y
    gem install mysql -- --with-mysql-include=/usr/include --with-mysql-lib=/usr/lib
    gem install mongrel --include-dependencies
    apt-get install git -y
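
    If you save the above as, say, install_rails.sh (the filename is just an example), a quick smoke test after it finishes might look like this. Since gem install rails would have pulled in Rails 2.x at the time, the old-style app generator and script/server are assumed:

    chmod +x install_rails.sh
    ./install_rails.sh

    # sanity check: confirm versions, then boot a throwaway app on port 3000
    ruby -v
    gem -v
    rails -v
    rails testapp
    cd testapp
    ruby script/server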

  • Tweets I Liked

    Wow, I've just finished some work for the creative technology agency I work for, and I'm taking a break.

    Today was a pretty awesome day in tweeting.

    Here are tweets I liked:

  • Survival Books for 2010

    At a San Francisco ad agency, I've seen an office of well over 600 people in December of 2008 shrink to an office of fewer than 140.

    I got to visit the office on Thursday, and it felt like a beehive where the queen bee has left and only the hangers-on are busy milling about.

    How did I survive? (Just to disclose: I'm no longer part of the ad agency and have found work at a creative technology agency.)

    I was the last hire at the SF office in December 2008. It wasn't until an ex-MySpace executive got hired in November 2009 that the hiring freeze ended. However, there were still additional cuts. In December I was invited to a lunch where I was told that we were the last folks remaining and that there would be no more cuts.

    Here’s what got me through that rough year:

    How to Survive in an Organization
    by James Heaphey

    This is a pretty awesome book that gives a realistic look into how normal people act when they get into organizations of a certain size. This is the kind of book that will wake you up to the aggressive and covert competition that your office mates engage in to get ahead. I didn’t read this book to get ahead though. Historical forces demanded something more basic: survival.

    Here are some helpful hints the book gives:

    • find a mentor right away to help you get things done in your organization
    • avoid routine work because routines can be automated and don’t show growth
    • increase the organization’s dependence on your talents
    • gain a reputation for being tough but fair

    I think one of my biggest challenges is the last item, and it's the sort of thing you learn through a mentor.

    On the off chance things went really bad and I found myself on the street, I got this book:

    How to Stay Alive in the Woods

    This is a no-nonsense guide to doing exactly what you need to do to survive. I've found greens and picked berries in the woods, and also made a lean-to. The book has a waterproof cover, and the pages can survive rain and dry out without getting brittle.

  • Market Prediction means AI

    A brilliant insight just came to me:

    Achieving true market prediction means having the capacity to create and predict the behavior of an artificial intelligence.

    Here are the reasons why:
    1. The description of AI from Caprica suggests consciousness can be built given a sufficient amount of recorded online activity.
    2. If you know how the parts of a market work, you can know how the whole works.
    3. A person is a part of the market.
    4. How a person interacts online is part of a market, e.g., people build and use systems for using products.
    5. An AI can imitate online market behavior such that it's indistinguishable from a person.
    6. If the AI program's behavior can be predicted, then you can predict a part of market behavior.
    7. Since you can predict the part, you can predict the whole.
    8. Therefore, in order to predict how the market will work, you need an AI.

    This implies that the more AI-like (or simply the better) your modeling is, the better you can predict how the market will behave.

    If AI behavior proves to be unpredictable, then perfect market prediction is impossible. (It might just be quantum in nature.)

    If AI proves to be impossible, then perfect market prediction is impossible. (See the Chinese Room Argument.)

  • Where are the people that hack together in meatspace?

    I’ve got a flu and am hopped up on ibuprofen and Nyquil.

    This year I've been telecommuting and co-working. I've made a few friends, but we don't hang out much. I've actually been blown off by a few people too, but you know what? You're hardly worth the thought.

    This rant is addressed to those of you I haven't met or haven't hung out with this year, because I truly feel you can do something awesome for the tech scene that's more than just about your career. Actually, if we do the shit I'll mention later, you'll see that it'll enhance our careers.

    I've seen you folks in cafes: the guy with the latest Apple laptop tailing server logs over an EV-DO card, or the woman compiling Drizzle on some beater Lenovo laptop converted to Linux. I've seen the creepy and utterly lame pick-ups that you engineer dudes attempt at co-working places. Life's more than eating where you shit.

    Next year, let's do something awesome. Let's fucking hang out in meatspace and build something awesome. Let's have awesome discussions and turn a particular cafe into the place to talk tech.

    Let's meet on some night during the week and actually build and learn shit, and actually help each other. San Francisco has about 17,000 people per square mile in some places. Why aren't these folks hanging out and making their lives more awesome? Insert Matrix quote here.

    I know some of you hackers are one paycheck away from disaster (if you're not already there), while some of you are doing okay. Let's all combine forces, create an awesome network, and see if we can actually build something.

    You're the kind of person who knows there's gotta be something better than Facebook or Google. Technology wasn't meant to pigeonhole and objectify people as consumers but to, in some weird way, liberate their human potential. Ya, it's pretty hard to buy this BS given what a rough year it was, but if you're reading this, here's my proposal to you:

    We meet each week at some common space and work on technology together.

    This might sound too simple, but ask yourself this question: What community do you belong to?

    Having a hard time answering? Working on a tech project? Then the community I’m proposing might be the one for you.

    Maybe there’s already a group out there. If you’re out there, I’d like to talk to you. We have to stop isolating ourselves and unite in a really powerful way.

  • WP Geo Plugin

    The print_GeoCache_Url function came across my email today from a self-described local designer and geek, but after a little research, I found out it only works up to WordPress 1.2. Thank goodness for the WordPress WP-Geo plugin, which I'm using right now.

    More info here: WPGeo.com

    [wp_geo_map]

  • EC2 Backup Script

    This is a quick and dirty EC2 backup script for virtual Unix servers that works just fine when crontabbed:

    #!/bin/bash

    DATE=`date +%m%d%Y-%H%M`
    BUCKET="codebelay-$DATE"
    PRIVATE_KEY='pk-codebelay.pem'
    PRIVATE_CERT='cert-codebelay.pem'
    USERID='555555555555'
    AWS_ACCESS_ID='AKIA0000000000000'
    AWS_SECRET='asdf+asdf+asdf+asdf'

    s3cmd mb s3://$BUCKET

    cd /mnt
    mkdir img
    ec2-bundle-vol -d /mnt/img -k /mnt/$PRIVATE_KEY -c /mnt/$PRIVATE_CERT -u $USERID -s 9999 --arch i386
    cd /dev
    mkdir loop
    cd loop
    mknod 0 b 7 0

    ec2-upload-bundle -b $BUCKET -m /mnt/img/image.manifest.xml -a $AWS_ACCESS_ID -s $AWS_SECRET

    # rm -rf /mnt/img
    echo "please register $BUCKET/image.manifest.xml" >> /mnt/registerbackups.txt
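
    To run it from cron, an entry along these lines does the job, assuming the script is saved at /root/ec2-backup.sh (the path and schedule are only examples):

    # crontab -e: bundle and upload a backup every night at 2:30 AM
    30 2 * * * /root/ec2-backup.sh >> /var/log/ec2-backup.log 2>&1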

  • Notes on adding more MySQL databases

    Just notes for myself on adding more MySQL databases without shutting down the master database.

    on existing slave:

    /etc/init.d/mysqld stop

    copy data dir from /var/lib/mysql and data from /var/run/mysqld to new slave database:

    cd /var/lib
    tar cvf Mysql_slave.tar mysql/*
    scp Mysql_slave.tar root@new-db.com:/var/lib/.
    cd /var/run
    tar cvf Mysqld_slave.tar mysqld/*
    scp Mysqld_slave.tar root@new-db.com:/var/run/.

    copy /etc/my.cnf from the old slave to the new slave
    add an entry for the new server-id
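
    For example, the only thing that has to change in the new slave's /etc/my.cnf is the server-id, which just needs to be unique across the master and all slaves (the value 3 below is only an illustration):

    [mysqld]
    server-id = 3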

    start existing slave:

    cd /var/lib
    tar xvf Mysql_slave.tar
    cd /var/run
    tar xvf Mysqld_slave.tar
    /etc/init.d/mysqld start

    start new slave:

    /etc/init.d/mysqld start
    mysql
    start slave;

    on masterdb:
    e.g.:

    grant replication slave on *.* to 'repl'@'192.168.107.33' identified by 'password';

    test on master:
    create database repl;

    check on slave:
    show databases; /* should show new database */

    test on master:
    drop database repl;

    check on slave:
    show databases; /* new database should be dropped */

    Now it's time to turn this into an automated shell script, with Expect in there for the interactive parts.
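
    Here's a rough sketch of what that automation could look like. It assumes key-based SSH between the boxes, so Expect would only be needed to drive password prompts if you don't have keys set up; the hostnames are placeholders, and the new slave's server-id still has to be bumped by hand:

    #!/bin/bash
    # sketch only: clone an existing slave onto a new slave and start replication

    OLD_SLAVE=old-db.com
    NEW_SLAVE=new-db.com

    # stop the existing slave and tar up its data and run directories
    ssh root@$OLD_SLAVE "/etc/init.d/mysqld stop && \
      tar cvf /var/lib/Mysql_slave.tar -C /var/lib mysql && \
      tar cvf /var/run/Mysqld_slave.tar -C /var/run mysqld"

    # push the tarballs and my.cnf from the old slave to the new slave
    ssh root@$OLD_SLAVE "scp /var/lib/Mysql_slave.tar root@$NEW_SLAVE:/var/lib/. && \
      scp /var/run/Mysqld_slave.tar root@$NEW_SLAVE:/var/run/. && \
      scp /etc/my.cnf root@$NEW_SLAVE:/etc/my.cnf"

    # bring the existing slave back up
    ssh root@$OLD_SLAVE "/etc/init.d/mysqld start"

    # unpack on the new slave, then start mysqld and replication
    ssh root@$NEW_SLAVE "tar xvf /var/lib/Mysql_slave.tar -C /var/lib && \
      tar xvf /var/run/Mysqld_slave.tar -C /var/run && \
      /etc/init.d/mysqld start && \
      mysql -e 'START SLAVE;'"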

  • Part II: Getting to 600 Concurrent Users

    I couldn’t sleep last night. I’m worried we’ll lose this client.

    So, just to be clear: I wasn't part of the crew responsible for scaling this site. I had already set up a scalable architecture for the site that would automatically and horizontally scale at Amazon. That idea got shot down for legal reasons that, to my surprise, haven't been in play for a while. Can we say, "office politics"?

    I totally recommend Amazon’s Autoscaling to anybody that’s new to this.

    Instead of auto-scaling, the site was architected by a local San Francisco firm whom I won't name here.

    Let's just hope enough people read this so that they won't even have to know the name of the company and will just know the smell of an unscalable architecture.

    Scalability requirement: 100,000 concurrent users

    This is how they set it up:

    • two web servers
    • one database
    • four video transcoders that hit the master database
    • one more app server that hits the master database
    • no slave db 😀

    If they had even googled 'building scalable websites', they would have come across a book that would have helped them avoid all of this: Cal Henderson's Building Scalable Websites. It should be mandatory reading for anybody working on a large website, and it just scratches the surface.

    So, how did we get to 600 concurrent users?

    We tweaked MySQL by putting this in /etc/my.cnf:

    [mysqld]
    max_connections=10000
    query_cache_size=50000000
    thread_cache_size=16
    thread_concurrency=16 # only works on Solaris and is ignored on other OSes

    We ran siege and were able to get to about 300 concurrent users without breaking a sweat, but now Apache was dying.
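
    For reference, the siege runs were along these lines (the URL and numbers here are illustrative, not the client's actual values):

    # 300 simulated users hitting the site for five minutes
    siege -c 300 -t 5M http://www.example.com/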

    So we tweaked Apache. We started out with this:

    StartServers 8
    MinSpareServers 5
    MaxSpareServers 20
    ServerLimit 256
    MaxClients 256
    MaxRequestsPerChild 4000

    And ended up with this:

    StartServers 150
    MinSpareServers 50
    MaxSpareServers 200
    ServerLimit 256
    MaxClients 256
    MaxRequestsPerChild 4000

    RAM and CPU were doubled.
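
    For what it's worth, the ServerLimit/MaxClients ceiling is ultimately a memory budget. A back-of-the-envelope check (the per-child size below is an assumption, not a measurement from this box) looks like this:

    # MaxClients ≈ RAM left for Apache / average resident size of a prefork child
    # e.g. 4 GB for Apache / ~16 MB per child ≈ 256 workers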