posted on June 24, 2013
In the echo chamber that is the startup community, the talk surrounding Edward Snowden and the surveillance state he revealed seems deafening. Deservedly so – the story bears repeating, and it demands continued attention if we're to hold our government accountable and keep it from becoming a footnote once the next natural disaster or other major world event hits the news.
Yet the community of those concerned about the NSA behavior Snowden revealed is comparatively tiny next to the behemoth of U.S. citizens who couldn't care less how much the government knows about them, even though the protection such information gathering provides is marginal and unprovable. According to them, they have nothing to hide.
Why is this so? Are people uneducated? Apathetic? Complacent?
Does the concern over the government's involvement only begin to take hold when they, or perhaps a loved one, are locked up on unstated charges for an indefinite amount of time, based on information gathered without a targeted warrant?
Does the mass gathering of information become relevant only when you personally are affected by it, and remain an abstract concept until that point?
Does a healthy skepticism of the government not exist so long as gas remains inexpensive, the paychecks keep coming, and American Idol’s still on every night?
We need people like Snowden, like Assange, like Ellsberg. We need people to read the news or turn on the TV, to hear about them, and to be angered by the details of what they reveal.
We need to drop the veil of complacency, because complacency breeds political indifference, and with indifference comes erosion of civil liberties.
And it’s our job, you and me and everyone who reads about the NSA’s actions and feels wronged and angry, to make sure that “nothing to hide” stops being an excuse. We need to show people the consequences of the erosion of civil liberties. We need the average Joe to be just as angry as we are.
We did it with SOPA. We can do it with the 4th.
posted on March 31, 2012
It didn’t begin as one.
It was April 2009, my second trip to Mexico. Back in the day I would freelance, and at the time I had a client who wanted me to develop a data aggregator for Craigslist. Coincidentally, I was finishing up the project right around the same time I went on vacation. My thought process was that it'd be a simple scp over to the client's box and I'd be on my way, but never before had I met IIS. A good third of my vacation was spent getting that Perl app to run on a Windows box.
There's got to be a better way, right? This was before the PaaS boom: Heroku was the only option out there, and it didn't support anything but Ruby. So, come summer 2010, I wrote my own, for Perl. I called it Phenona.
It took a few months of learning about network topologies, Redis, ØMQ, LXC, redundancy, distributed systems, and the rest before I was ready to jump in. And only halfway through coding did I realize that this could be of some utility to others. So I scrapped it and rewrote it with others in mind. There was no billing system even in sight (a la 37signals), but in December of that year I was ready with what one could call a private beta. Given my $20/month budget (my allowance at the time), the idea was to let a small group of people in to give feedback, then hopefully jump straight to a launch.
The response surprised me. I’d devised a registration form that would ask each prospective user a wide array of questions so that I could get a better perspective of the market. Yet even with the additional friction, 10 users registered within the first week. Then another 10. Then 50. Then 100. Each day I would log on to MailChimp (fantastic service) and read the comments of the day and would be surprised again and again about the various backgrounds people were coming from and the ideas they had for improving Phenona. In the meantime, I was navigating the innards of the CPAN build process, getting a client library out the door and iterating, iterating, iterating on the server-side.
One day, I got an email from ActiveState. They were getting into the cloud business and wanted to talk about Phenona. Phenona was written in Perl, for Perl apps, and ActiveState is widely known for the excellent ActivePerl, so it was a natural fit. For obvious reasons, I can’t go into detail about the months that followed, but it was quite the rollercoaster ride. I learned the concepts of “due diligence”, “indemnity”, contracts, lawyers and more lawyers, and even family law in Washington state (I am an emancipated minor). ActiveState was fantastic to work with through it all; they’re seriously the nicest people in the business.
Come June 14th (2011), it was go time: announcement day. A regular school day, I might add. I got up at 5am to be able to push the blog post to the Phenona blog in time and send the tweets out, and headed off to school.
The Register was the first. Then Geekwire. But aside from that, the 14th was quiet. So was the 15th. But on the third day, something happened, and suddenly my inbox was full. The few weeks that followed were insane. An interview for KOMO (local news station in Seattle), a video interview for national TV in Russia, the GeekWire podcast, Skype interviews for various bloggers, dozens of email interviews. A news outlet had emailed the principal of my school to get a quote, and the principal had forwarded the email to all my teachers. It started out as flattering but quickly progressed to exhausting, to the point where the red (1) on the Mail icon in the OS X dock would cause an involuntary sigh.
But it was worth it, many times over.
Over the past year, I've met more fantastic people than I can possibly count. The support and encouragement from the crowd has been far more than I could've asked for when I committed the first line of Phenona's code years ago.
So where have I been since June? Working behind the scenes on Stackato, which is in many ways a continuation of the Phenona idea: frictionless deployment to the cloud, as widely accessible as possible.
And of course, in my spare time, I’ve been thinking about the next big idea. An entrepreneur’s spirit is a crazy thing.
posted on October 17, 2011
(Note: this is mirrored from the Phenona blog; Phenona was acquired by ActiveState, so the original post is no longer accessible. Update 10/27: now also available at the ActiveState blog.)
Update 6/2014: Just use Docker. (http://www.docker.com/)
EC2 is already a (para)virtualized environment, which means you can't run your own virtualization (KVM/VirtualBox/qemu). As an alternative, Linux recently introduced a new kernel subsystem called cgroups, which provides a way to isolate process groups from each other in the kernel. A project soon formed around this new technology that allows for very thin, fast, and secure quasi-virtualization. It's called LXC. And it works in EC2 perfectly.
You'll want a recent Linux AMI (preferably kernel 2.6.35 or higher). I use Ubuntu 11.10, and the following instructions are meant for that OS. I can't vouch for other distros, but the instructions should be easily portable. Ubuntu is excellent for the LXC + EC2 combination: there are pre-made AMI images, the kernel supports LXC out of the box, and the software repositories are hosted in the EC2 cloud, which makes for extremely fast system updates. Also, any instance type works; even a t1.micro will suffice (my weapon of choice for testing purposes).
Start by SSH-ing into your EC2 server. You'll need to run almost all of the following instructions as root, so let's do:
sudo -i
to become root. Otherwise, you can prepend 'sudo' to the beginning of every command from now on (unless specified otherwise).
Now, we need to install a few packages:
apt-get update && apt-get install lxc debootstrap bridge-utils dnsmasq
Then run lxc-checkconfig and make sure that the tests pass (all of them should if you're using the AMI).
Keep in mind that the effects of most of the commands from here on out (specifically iptables, sysctl, mount, brctl, and any edits to /etc/resolv.conf) will not persist across a reboot, even on an EBS-backed instance. These are in-memory changes which go away as soon as you shut down the machine; if you bring the instance back up, you'll need to run them again, otherwise things will be broken. There are several ways around this: iptables rules and /etc/resolv.conf can be set by an init script, sysctl values can be set in sysctl.conf, mounts can be specified in /etc/fstab, and the bridge can be defined in /etc/network/interfaces (add the br0 interface). However, for the purposes of this guide (I don't use EBS-backed instances, personally), we'll assume instance storage, where config is lost on reboot.
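If you do want the settings from the rest of this guide to survive a reboot on an EBS-backed instance, one simple approach is to collect the non-persistent commands into an init script. A sketch (using the same bridge address and commands that follow below; adjust to taste) might look like this in /etc/rc.local:

```
#!/bin/sh
# /etc/rc.local -- re-apply the in-memory LXC setup on every boot
mount -t cgroup none /cgroup
brctl addbr br0
brctl setfd br0 0
ifconfig br0 192.168.3.1 up
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1
service dnsmasq restart
exit 0
```

This is just one way to do it; splitting the pieces into sysctl.conf, /etc/fstab, and /etc/network/interfaces as described above is cleaner but more work.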
We’ll need to create a place on the system to hold cgroup information (required for LXC to work). I use /cgroup. Let’s mount a cgroup environment there.
mkdir /cgroup
mount -t cgroup none /cgroup
Now, let’s create a network bridge for the containers to be able to connect to the network/Internet. Simply run:
brctl addbr br0
brctl setfd br0 0
ifconfig br0 192.168.3.1 up
Now we need to set up a few system rules for the containers to be able to reach the Internet:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1
Let's set up DHCP/DNS on our new bridge. Open up /etc/dnsmasq.conf for editing (vim/nano/ed/cat, your choice). Uncomment the necessary lines so that the conf file looks like the following:
interface = br0
listen-address = 127.0.0.1
listen-address = 192.168.3.1
domain = containers
dhcp-range = 192.168.3.50,192.168.3.200,1h
Now, you'll need to edit /etc/dhcp/dhclient.conf for DNS to resolve properly locally. Add the following lines to the beginning:
prepend domain-name-servers 127.0.0.1;
prepend domain-search "containers.";
(Don't forget the dot after containers; that's not a typo!)
Now we need to renew our DHCP lease so that dhclient will regenerate /etc/resolv.conf.
dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp3/dhclient.eth0.leases eth0
Now, let’s restart dnsmasq so it’ll re-read the new configuration.
service dnsmasq restart
Next, we need to create the environment inside the container. There’s a script that comes with lxc called lxc-ubuntu, which will set up the container. However, it’ll require a bit of tweaking for the environment to work. I’ve done the tweaking for you, and put the new script up, so simply run: (updated 4/30/11 for Ubuntu Server 11.04)
wget -O lxc-ubuntu http://bit.ly/ec2ubuntulxc
chmod +x lxc-ubuntu
Now, let’s create a new container:
./lxc-ubuntu -p /mnt/vm0 -n vm0
Wait a while for the script to finish, and your container is set up in /mnt/vm0. Let’s try it out!
lxc-start -n vm0
Type in root for the username and root for the password. Try pinging Google:
ping google.com
If it works, your Internet is set up! Now let’s try another thing (make sure you run this from the VM, not from the host!!):
poweroff (this shuts down the VM, and puts you in the host again)
lxc-start -n vm0 -d (this runs the VM in daemon mode)
To check if a VM is running, type:
lxc-info -n vm0
(it should say RUNNING). To test the network, try pinging the VM (this might not work right away; you might have to wait up to a minute):
ping vm0
If those two work, the VM is now in your DNS and you can address it by its hostname. Cool, huh?
Creating a new VM
Creating another VM is as simple as:
./lxc-ubuntu -n vm1 -p /mnt/vm1
The packages won’t be redownloaded, and the command should complete quickly.
Clone existing VM
If you want to clone your existing VM, you’ll need to do a few things:
cp -r /mnt/vm0 /mnt/vm1
Now edit /mnt/vm1/config and replace all references to vm0 with vm1. Do the same with /mnt/vm1/fstab. Then open /mnt/vm1/rootfs/etc/hostname and replace the hostname with vm1. Finally, run:
lxc-create -n vm1 -f /mnt/vm1/config
Upon starting the VM, you should be able to ping it/ssh to it:
ping vm1
If not, lxc-console into the VM and check your connection. Keep in mind you only need one br0 for all your instances, but you can create many bridges, if you so desire.
Running services inside the container
At Phenona, we run Perl web servers and the like inside these containers. You may want them to be accessible from outside the VM (from the rest of EC2, or outside EC2). To do this, you’ll need to port forward from the host to the VM. Simply run:
iptables -t nat -A PREROUTING -p tcp --dport <host port> -j DNAT --to-destination <VM IP>:<VM port>
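For example, to expose a web server listening on port 80 inside a VM on the host's port 8080 (the address 192.168.3.50 is an assumed DHCP lease from the range configured earlier; check your VM's actual IP first):

```
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.3.50:80
```

Since the leases rotate, you may want to pin a static IP in the container (or in dnsmasq) before relying on a rule like this.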
Hibernating a container
To 'hibernate' a container (save the current state – the running processes – of the VM for instant restoring later), do:
lxc-freeze -n vm0
To restore it, do:
lxc-unfreeze -n vm0
Installing additional packages into the container
Your container is just like any other Ubuntu system. Therefore, you can install packages the usual way: lxc-console (or SSH) into the container and apt-get install whatever you need.
Setting resource limits
One of the benefits of LXC is that you can limit resource usage per-container. Let’s delve into the various resources you can limit:
CPU: there are two ways of limiting CPU in LXC. On a multi-core system, you can assign different CPUs to different containers, like so (add this line to your container config file, /mnt/vm0/config or similar):
lxc.cgroup.cpuset.cpus = 0 (assigns the first CPU to the container)
lxc.cgroup.cpuset.cpus = 0,2,3 (assigns the first, third, and fourth CPU to the container)
The alternative (the one that makes more sense to me) is to use the scheduler: you assign relative weights, as in 'I want this container to get three times the CPU of that one'. For example, add:
lxc.cgroup.cpu.shares = 2048
to the config to give a container double the default (1024).
To limit RAM, simply set:
lxc.cgroup.memory.limit_in_bytes = 256M
(replacing 256M with however much RAM you want to allow).
To limit memory plus swap combined, set:
lxc.cgroup.memory.memsw.limit_in_bytes = 1G
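Taken together, the CPU and memory limits all live in the same container config file. An illustrative excerpt (the values here are examples, not recommendations):

```
# excerpt from /mnt/vm0/config -- illustrative values only
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 2048
lxc.cgroup.memory.limit_in_bytes = 256M
lxc.cgroup.memory.memsw.limit_in_bytes = 1G
```

Changes take effect the next time the container is started with that config.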
Disk space: there's no official way to limit it, so it's up to you. You can use LVM (in EC2? Good luck.), or you can create a filesystem in a file and loop-mount it over the container's rootfs to cap its space, something like:
dd if=/dev/zero of=somefile.img bs=1M count=4096
mkfs.ext3 -F somefile.img
mount -o loop somefile.img /mnt/vm0/rootfs
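Writing all those zeros out takes a while; a sparse-file variant creates the same-sized image instantly by seeking past the end instead of writing, so disk blocks are only allocated as the container actually uses them:

```shell
# create a sparse 4 GiB image: count=0 writes nothing, seek=4096 (x 1M) sets the size
dd if=/dev/zero of=vm-disk.img bs=1M count=0 seek=4096
# then, as root (-F lets mkfs.ext3 proceed on a regular file):
#   mkfs.ext3 -F vm-disk.img
#   mount -o loop vm-disk.img /mnt/vm0/rootfs
```

Note that a sparse image caps the container's usage but doesn't reserve the space, so the host filesystem can still fill up underneath it.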
To limit network bandwidth per container, do some reading on the tc utility. Keep in mind you'll need to use separate bridges (br0, br1, ...) for each container if you go this route. Don't forget to edit the config of each VM to match its new bridge if you do so.
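As a starting point, a minimal tc sketch (assuming you've given the container its own br1 bridge as described above) caps egress on that bridge to 1 Mbit/s using the token bucket filter:

```
# throttle traffic leaving br1 to 1 Mbit/s (run as root)
tc qdisc add dev br1 root tbf rate 1mbit burst 32kbit latency 400ms
```

The burst and latency values here are the conventional examples from the tbf documentation; tune them to your workload.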
Thanks to the Foaa and Mudy blogs for getting me started on my way towards a running LXC.
Some further reading: the main LXC site, the LXC HOWTO, and IBM’s tutorial.
NOTE: When following other guides on LXC, be very careful when messing with the network in the EC2 environment (restarting networking services or altering /etc/network/interfaces on the host): one wrong command and the connection between you and your instance will drop (you'll lose SSH), and you'll effectively lose access to your instance. I did that many, many times while exploring LXC. The instructions I've provided here have been tested and will not drop your EC2 connection, but I can't vouch for other methods.