I've done some mining of Dogecoin and a few altcoins, and owned a tiny bit of bitcoin, but never really much. I'd run the wallet applications for the altcoins I'd used in the past and it was annoying but fine. Most of my bitcoin transactions have been on hosted wallets or signed using a command line client.
I've often seen people complaining about using the bitcoin wallets on their computers for a number of reasons, but hadn't experienced the pain.
I recently met with Trace Mayer, and he was raving about his Armory product, so I downloaded it to see how it would do. First impressions were "meh". I imported a few vanity addresses, backed them up to some encrypted USB drives, etc, and then thought I'd start working with them.
Then I discovered that Armory doesn't run the node software itself; it expects to find a locally running Bitcoin-QT instance. "Ugh", I thought, "I'll finally get Bitcoin-QT up and running. How bad can it be?"
Yesterday morning I found instructions describing how to use BitTorrent to download a copy of the 20-gigabyte blockchain. So, OK, I did that. It took about the whole day. And now my laptop has 20 gigs less space available. That's 20% fewer baby pictures my laptop can hold so that I can, "Have my own money on my computer." Given that I've paid a lot more money than I own in bitcoin to recover baby pictures off failed hard drives, this is not a winning proposition at the outset.
Then, with the 20 GB file downloaded and in place, I opened the client, expecting it to import the wallet in a few seconds. I mean, shit, I'd just spent 9 hours downloading a 20 GB file.
Nine hours of terrifyingly-hot-laptop operation later, and the client has ALMOST finished importing that file.
What the heck, people
This is a huge opportunity. Digital currencies can change a lot of things about the world. You've had five years, and over a year of a billion-dollar market cap - that means lots of people have tens (maybe hundreds) of millions of dollars worth of bitcoin at their disposal and a really quite liquid market. And this is the garbage you want people to use?
You think that your bitcoins will become worth a million dollars each with the software being this steaming pile of shit?
I guess it's just more of an announcement that there is a TON of work to be done. I just wish there were people with the foresight to fund it with some newly-minted wealth...
Like most other "software writers" I've met who primarily write Rails code, I don't have any formal CS training - or any formal training at all, really. I just started learning how to build things, which was hard and horrible in PHP and spectacularly easy in Rails. Love it or hate it, it's SO EASY to build systems in Rails when you have no idea what you're doing. I'll always argue that, and I think it's amazing.
That said, I never learned SQL well...
I'm building a new project in Vert.x and it's amazing to be able to separate concerns well. However, I "miss" having a Rails project to fall back on. Here I'll talk a bit about Vert.x and explore using the ReactJS framework to build the UI.
To enable Docker containers to connect to the external network in a RackConnect environment,
touch /etc/rackconnect-allow-custom-iptables. Otherwise RackConnect will destroy all of Docker's forwarding rules.
So I just got bitten by a little issue using Docker & RackConnect. I had a nice fresh server brought up with lots of containers running and everything was grand. Ish.
I found two different problems, both with the same solution. First, I would start a service container (RabbitMQ), link a "web" container to it, and then go to my RackConnect config and open a port to test connecting to the "web" container (on a port number like 49156 or whatever) - but then, all of a sudden, that container wouldn't be able to access the service container.
Also, from within any container, I couldn't access the network: both curl google.com and ping 22.214.171.124 just timed out.
I couldn't find a direct answer to this question, but it turns out that Docker writes a ton of iptables FORWARD entries during the course of doing its thing, and those periodically get clobbered by RackConnect when it runs.
This article explains how to let RackConnect know to leave your custom rules alone - it then does a merge rather than a force-overwrite. So touch that file, restart Docker, and be on your way!
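The whole fix boils down to two commands. A minimal sketch - the exact restart command depends on your distro's init system:

```shell
# Tell RackConnect to merge its managed rules with any existing
# iptables rules instead of force-overwriting the whole ruleset.
touch /etc/rackconnect-allow-custom-iptables

# Restart Docker so it re-creates its FORWARD chain entries,
# which will now survive RackConnect's periodic runs.
service docker restart
```

After the next RackConnect run, you can sanity-check that Docker's rules are still there with iptables -L FORWARD.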
Docker and log aggregation: not a solved problem. But the strategy I'm going to adopt for my current venture is to have all of my containers log everything possible to STDOUT, and then have my log collection agents ingest the
docker logs output of each container. That way I can keep my containers simple (a single process, without cheats like supervisord), and the actual operations also remain simple.
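As a rough sketch of what an agent could do (assuming syslog via logger as the sink - your collector will differ), it can just follow each running container's stream:

```shell
#!/bin/sh
# Hypothetical sketch: follow every running container's STDOUT/STDERR
# via `docker logs -f` and forward each line to syslog, tagged with
# the container name. Swap `logger` for your real collection agent.
for id in $(docker ps -q); do
  name=$(docker inspect --format '{{.Name}}' "$id")
  # --tail 0 skips history so restarts don't re-ship old lines;
  # -f follows the stream as the container writes.
  docker logs -f --tail 0 "$id" 2>&1 \
    | logger -t "docker${name}" &
done
wait
```

This version only sees containers that exist when it starts; a real agent would also watch docker events for newly started containers.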
This is a slightly beefed up version of the lightning talk I gave at Austin on Rails last night - Feb 25, 2014.
I had intended to give this talk on things I've learned about infrastructure as I've helped stabilize and grow it at my last two jobs. I've had a lot of experience working on deployment infrastructure, from Dreamhost, to Rackspace, to Heroku, to AWS, and now I'm managing our infrastructure on Amazon OpsWorks using Chef (which I've been writing about recently).
However, as I thought about it, the problems that have been hindering our growth haven't been so much server-infrastructure related as they were visibility related...
I've written before about how OpsWorks kind of pushes you into a weird architecture because of their default behaviors for applications in a stack.
Namely, the system tries to push all the applications on all of the layers, and chaos ensues.
But, it turns out I needed to finally move our various apps into one stack, and after a few days of poking, prodding, waaaaaaaaaaiting for machines, I got it working, so I thought I'd document it.
I'm in the middle of changing our application architecture at work from a bunch of OpsWorks stacks into one large stack - so that I can tell my individual application nodes what's going on where. I ran into one pretty big hiccup while porting my stack from the "toy" one I got by following this excellent walkthrough.
Basically, the demo stack uses public git repos - no custom cookbooks or private app code. That clearly doesn't work in the real world, though. We have a large set of custom cookbooks that we're working on supplementing, so I needed to pull those down. To do so, not only did I need to change the stack settings to point to the repo using the git@github... address (instead of git://github...),
but I also had to go to the NAT security group that I had set up using CloudFormation and add port 22 to the rules allowing inbound and outbound connections. The defaults allow port 9418 (the git protocol), but that protocol does not allow you to use our deploy key for authentication.
So if you are having problems setting up your OpsWorks using custom cookbooks inside a VPC, then make sure you have port 22 forwarded on your NAT.
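If you'd rather script it than click through the console, something like this should work - sg-12345678 and the CIDR are placeholders for your NAT security group ID and your VPC's address range:

```shell
# Hypothetical sketch: allow SSH (port 22) through the NAT security
# group so private OpsWorks instances can fetch git repos over SSH.

# Inbound: let instances inside the VPC reach the NAT on port 22.
aws ec2 authorize-security-group-ingress \
  --group-id sg-12345678 \
  --protocol tcp \
  --port 22 \
  --cidr 10.0.0.0/16

# Outbound: let the NAT forward that traffic on to GitHub.
aws ec2 authorize-security-group-egress \
  --group-id sg-12345678 \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0
```

The port 9418 rules from the CloudFormation template can stay; they just aren't enough on their own once you switch to git-over-SSH URLs.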
So much to learn...