This is part 2 (of 2, it seems) of the series of posts about packaging up an application to deploy. Toward "sane" operations. It's pretty hairy in there, so don't go in unless you want to see a lot of code.
So I saw an interesting conversation on Twitter this morning between @DHH and @thijs. They were discussing the NewRelic IPO, and the subject got around to running ancient Rails versions, and the fact that a lot of the migration projects off Rails 2.3 have been driven in large part by the movement of the surrounding ecosystem (read: gem updates).
This is a work in progress... After @sigil66's tech talk at Boundary, I've been inspired to try to understand packaging and how to use it to deploy my software.
This is an application written in JRuby running on the Vert.x platform - so this will be an exploration of deploying a Vert.x application (on which there is not a ton of documentation, in any case...)
I've done some mining of Dogecoin and a few altcoins, and owned a tiny bit of bitcoin, but never really much. I'd run the wallet applications for the altcoins I'd used in the past and it was annoying but fine. Most of my bitcoin transactions have been on hosted wallets or signed using a command line client.
I've often seen people complaining about using the bitcoin wallets on their computers for a number of reasons, but hadn't experienced the pain.
I recently met with Trace Mayer, and he was raving about his Armory product. So I downloaded it to see how it would do. First impressions were "meh". I imported a few vanity addresses, backed them up to some encrypted USB drives, etc., and then thought I'd start working with them.
Then I saw that Armory doesn't run the node software itself; it relies on a locally running Bitcoin-Qt instance. "Ugh", I thought, "I'll finally get Bitcoin-Qt up and running. How bad can it be?"
Yesterday morning I found something describing how I could use BitTorrent to download a copy of the 20 gigabyte blockchain. So, ok, did that. It took about the whole day. And now my laptop has 20 gigs less space available. That's 20% fewer baby pictures my laptop can hold so that I can "have my own money on my computer." Given that I've paid a lot more money than I own in bitcoin to recover baby pictures off failed hard drives, this is not a winning proposition at the outset.
Then, 20 GB file downloaded and placed, I opened the client - expecting it to import the file in a few seconds. I mean, shit, I'd just spent 9 hours downloading a 20 GB file.
Nine hours of terrifyingly-hot-laptop-operation later, the client has ALMOST finished importing that file.
What the heck, people?
This is a huge opportunity. Digital currencies can change a lot of things about the world. You've had five years, and over a year with a billion-dollar market cap - that means lots of people have tens (maybe hundreds) of millions of dollars' worth of bitcoin at their disposal, and a really quite liquid market. And this is the garbage you want people to use?
You think that your bitcoins will become worth a million dollars each with the software being this steaming pile of shit?
I guess it's really just more evidence that there is a TON of work to be done. I just wish there were people with the foresight to fund it with some newly-minted wealth...
I'm building a new project in Vert.x and it's amazing to be able to separate concerns well. However, I "miss" having a Rails project to fall back on. Here I'll talk a bit about Vert.x and exploring using the ReactJS framework to build the UI.
Like most other "software writers" I've met who primarily write Rails code, I don't have any formal CS training - or any formal training at all, really. I just started learning how to build things, which was hard and horrible in PHP and spectacularly easy in Rails. Love it or hate it, it's SO EASY to build systems in Rails when you have no idea what you're doing. I'll always argue that, and I think it's amazing.
That said, I never learned SQL well...
To enable Docker containers to connect to the external network in a RackConnected environment, touch /etc/rackconnect-allow-custom-iptables. Otherwise RackConnect will destroy all of Docker's forwarding rules.
So I just got bitten by a little issue using Docker & RackConnect. I had a nice fresh server brought up with lots of containers running and everything was grand. Ish.
I found two different problems, both with the same solution. First, I would start a service container (RabbitMQ), link a "web" container to it, and then go to my RackConnect config and open a port to test connecting to the "web" container (on a mapped port like 49156 or whatever) - but then all of a sudden the "web" container wouldn't be able to access the service container anymore.
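For concreteness, the setup looked roughly like this (the container names and the web image name here are illustrative, not the exact ones I used):

```shell
# Start the service container (RabbitMQ).
docker run -d --name rabbit rabbitmq

# Link a "web" container to it. -P publishes its exposed ports on
# high host ports (e.g. 49156) that RackConnect then has to allow.
docker run -d --name web -P --link rabbit:rabbit my-web-image

# See which host port Docker actually mapped.
docker port web
```

It's that published host port that goes into the RackConnect config - and the linking between containers is exactly what depends on Docker's iptables FORWARD rules.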
Second, from within any container, I couldn't access the network at all. curl google.com or ping 188.8.131.52 were just timing out.
I couldn't find a direct answer to this question, but it turns out that Docker writes a ton of iptables FORWARD entries during the course of doing its thing, and those periodically get clobbered by RackConnect when it runs.
This article tells how to let RackConnect know to leave your custom rules alone - it does a merge rather than a force-overwrite. So touch that file, restart Docker, and be on your way!
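The whole fix, as a sketch (run as root on the host; the file path is the one RackConnect checks before rewriting rules):

```shell
# Tell RackConnect to merge with, rather than clobber, custom iptables rules.
touch /etc/rackconnect-allow-custom-iptables

# Restart Docker so it re-creates its FORWARD chain entries.
service docker restart

# Sanity check: Docker's forwarding rules should now survive
# the next RackConnect run.
iptables -L FORWARD -n -v
```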
Docker and log aggregation: not a solved problem. But I think the strategy I'm going to adopt for my current venture is having all of my containers log everything possible to STDOUT, and then just have my log collection agents ingest the docker logs of each container. That way I can keep my containers simple (a single process, without cheats like supervisord), and the actual operations also remain simple.
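A minimal sketch of that strategy - a collector that follows each running container's stdout/stderr with docker logs -f and appends it to a per-container file that an agent can ship off-host (the log directory is an assumption, pick whatever your agents watch):

```shell
#!/bin/sh
# Follow every running container's output into per-container log files.
mkdir -p /var/log/containers

for id in $(docker ps -q); do
  # Container name, minus the leading slash that docker inspect prints.
  name=$(docker inspect --format '{{.Name}}' "$id" | tr -d '/')

  # Stream the container's STDOUT/STDERR into its own file, in the background.
  docker logs -f "$id" >> "/var/log/containers/${name}.log" 2>&1 &
done
wait
```

This deliberately treats the container as a single-process black box: the app just writes to STDOUT, and everything about shipping, rotating, and parsing logs lives on the host side.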