Developer Drain Brain

November 14, 2015

Manna from heaven

Filed under: Uncategorized — rcomian @ 5:51 pm

I’ve been a developer using the Microsoft stack (C++ with MFC, C#) since 1998. I’ve never professionally used any other stack for a significant length of time. Don’t get me wrong, I want to. I *really* want to.

I’m not a Microsoft (or Apple) fan, and the recent ASP.NET releases have helped to codify why.

We’re waiting for the next version of ASP.NET. It’s going to be the greatest version ever, it’s pretty much copying node.js in its architecture, we’re getting cross platform support, large parts are being open sourced and it’s going to be amazing.

Only it isn’t. I’ve been watching a colleague use the current beta of vNext. Basically, it includes Bower and the Gulp build chain, which work fantastically well – neither of these is a Microsoft technology. Everything else has been painful. Worst of all, SignalR has been removed from the betas and is now unavailable in the stack. It won’t be back for the main release either.

Apparently, Microsoft think that a fully fledged, top of the range webstack doesn’t have to include push services.

That’s bad enough, but it’s the reason this is bad that really gets me. I realised all us C# developers are sitting here waiting for “manna from heaven”, for the goods to just fall out of the sky so we can lap them up, whatever they are. We can’t credibly start a new project on the old version of ASP.NET, knowing that the new version is right around the corner. It’s what everyone is going to use, regardless of whether that’s the right thing or not; we’re going to use it because it’s what we’re told we should use. It’s what works with C# and .Net, the framework we’re paid to develop with.

I’m comparing this with *everything else* that’s out there. There have been many languages and environments for making websites: PHP, Python, Ruby. They’ve all come about organically, from people who needed something and took the effort to create it. Node.js is the most recent manifestation of this.

I *like* Node.js, although I was sceptical of it for a very long time. But it works. And the way it works is interesting. I’m not choosing any particular framework when I choose node. I can choose express, or sails, or hapi or any of the others. Or none, I can do it all raw if I want to.

The “next” version of node is exciting, but when things start going wrong or getting stale, someone forks it and brings it back up to speed. This has already happened, and the fork has now been merged back into a proper platform. No “manna from heaven”, more “this isn’t working, let’s do it ourselves”.

If the next version of express isn’t any good, people will just move to another framework, or keep using the old one, someone will probably even keep maintaining an old version if people use it.

I can’t see a push library suddenly becoming incompatible with the next version of express – we’re not going to lose major features like this. And even if we did, it wouldn’t matter: we could still use SockJS, or raw websockets if we wanted. There are thousands of options out there.

No one is telling me I must use any particular framework. Choice is its own burden for sure, but I can choose frameworks that fit my style, attitude and the problem at hand. When any of those change, we can move frameworks without ditching the whole stack.

It’s the same with Linux. When I use Windows, I’m choosing one set of apps. When I choose OSX, I’m choosing another. When I choose Linux – well, I’m choosing another set of apps for sure. But there’s a difference. You can’t *choose Linux*, you choose *a* Linux. I choose Gentoo, because it works how I think. You might choose Ubuntu because you like how it works.

That’s great, we’ve both chosen a system that looks and works the way we think computers should function. Notice what we haven’t chosen, though – we haven’t chosen a different set of apps. Pretty much everything that’s written for Ubuntu will work on Gentoo and vice versa (support may be sketchy, but that’s another problem).

My point is, though, that everywhere else in the industry, things aren’t given to us from on high. Things are developed to solve a particular problem; if they solve your problem, you’re free to choose them, or not. And if you don’t, you don’t have to switch out the entire stack.

I can carry on using my same Linux apps if I went to Ubuntu. Or Fedora, or Mint. I can carry on using my push library if I switch away from express – or use it on its own. I can carry on using express if I choose not to use that library anymore.

I don’t know if I’m being clear here. I think my point is very subtle. But I know what I’m not liking. I’m not liking waiting for Microsoft to pour manna from heaven on me, knowing that I’ll be forced to eat it up regardless of what I actually think of it, because everyone else is going to be using it as well and the industry will demand it of me regardless. And Microsoft *will* make it “good enough”, for sure. I just wish I could choose the new ASP.NET. Or choose not to use the new ASP.NET.


October 23, 2015

OOP 101 – Why separate Classes and Instances?

Filed under: Development — rcomian @ 7:15 pm

In the previous post we saw that we can define a class and then create an instance of that class to use it.

A serious question is why do we have to do this? Why instantiate a class?

The answer is that we can use multiple instances of a class at the same time. So we can write one class that represents one piece of functionality and re-use it multiple times.

Let’s think of a concrete example: a button.

In any application, on many pages, you will often see multiple buttons. They all work roughly the same way: they draw a piece of text with a box around it; when you press one, it animates to look like it’s been pressed, and then something happens.

Software is all about working out what code can be re-used, and a button sounds like an ideal thing to reuse. We won’t go into a full button definition, but we could define a simple button to demonstrate our concepts.

Let’s first work out what we want our button to do for us. I’m going to make up some requirements:

  1. We need to be able to say what text should be on the button
  2. We need to be able to tell the button where on the screen it should be drawn using X & Y coordinates
  3. We need to tell the button to draw itself normally
  4. We need to tell the button when it has been clicked so that it can draw itself in the “pressed” state

class MyButton {
  string ButtonText;

  int PositionX;
  int PositionY;

  void Draw() {
  }

  void Click() {
  }
}
We should be able to see how this class fits our requirements. First off, we have some data associated with the class. ButtonText is a member variable and will hold the text that gets displayed on the button. This lets us choose what text appears on the button, which is requirement 1.

Next we have 2 member variables: PositionX and PositionY. These will hold where on the screen we need to draw the button, which meets requirement 2.

Then we have the Draw method. This is empty in our example, but in real life it would use the ButtonText and the PositionX and PositionY data to draw the button with the correct label at the correct location on the screen. This would meet requirement 3.

Then finally we have the Click method. Again this is empty in our example, but it would draw the animation for when the button was clicked, making it look like it was pressed down and then pop out again. Again it would use the ButtonText and PositionX and PositionY data to do this, and meet requirement 4.

So as before, with just the code above, we haven’t actually done anything – we’ve just declared a class. We don’t have any buttons; we need to make some, say what their text is and where they are, and then tell the code to draw them. So if we had a program that needed an “OK” and a “Cancel” button, we could use them like this:

MyButton okButton = new MyButton();
okButton.ButtonText = "OK";
okButton.PositionX = 500;
okButton.PositionY = 500;

MyButton cancelButton = new MyButton();
cancelButton.ButtonText = "Cancel";
cancelButton.PositionX = 600;
cancelButton.PositionY = 500;

So what we do is first create an instance of MyButton and store it in the okButton variable. Internally a piece of memory is reserved that is large enough to store the ButtonText and PositionX and PositionY member variables for this button and the location of this memory is given the name okButton in our code.

Very simply, we can say that things look like this:


Next we use the okButton variable and say “inside this variable’s reserved memory location, set the ButtonText data to the string "OK"”.
Then we do the same for PositionX and PositionY, storing where we want this button to be drawn.

Updating the previous diagram with more detail, we get something like this:

Finally we say, “call the Draw method of the MyButton class, but whenever that method uses ButtonText or PositionX or PositionY, use the data in the memory reserved for the variable okButton”.
This is all said by the much shorter syntax: okButton.Draw();

So at this point, we would have a button drawn on the screen saying “OK”. And we’ve got our variable okButton so that we can talk to this button later.

Next in the code we instantiate a second button. When we do this, we reserve a new, completely different area of memory, which again is big enough to store the data for the MyButton class. This memory is in a different location to the memory for the previous instance and given the name cancelButton in our code.
Once we’ve allocated the memory, we set its contents appropriately: the string for the ButtonText and the PositionX and PositionY member variables in that memory location.

At this point, our program and its memory would look something like this:

We then call Draw again, which runs the Draw method in the MyButton class again. But this time, it uses the data stored in the cancelButton memory location to do its work, not the data for okButton. This lets us have 2 buttons on the screen which are completely independent.

So what about that Click method? We’d call this when we determined that a user had pressed the button somehow (I’m being very woolly on the details, every UI framework already has a set of classes for doing all this, and they’re all different). Basically, we’d keep hold of our button variables somewhere, and later on in the code, we’d call the Click method at the appropriate time, just like this:

... // Somehow determine that the user had clicked OK
okButton.Click();
... // Do whatever we need to do on OK

This would run the code we wrote in the Click method on the MyButton class. But it would do so using the data for the okButton. In this way, the okButton – and only the okButton – would show the animation of being clicked.
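To make this concrete, here is a version of the button example that actually compiles and runs, written in Java. The post’s pseudocode is language-neutral, so the Java details here are my own – in particular, the “drawing” is just simulated with println, and the names keep the post’s capitalisation rather than Java’s usual camelCase:

```java
// A compilable Java sketch of the MyButton example. The member and method
// names follow the post's pseudocode; "drawing" is simulated with println
// since no real UI framework is involved.
class MyButton {
    String ButtonText;
    int PositionX;
    int PositionY;

    void Draw() {
        System.out.println("[" + ButtonText + "] drawn at ("
                + PositionX + ", " + PositionY + ")");
    }

    void Click() {
        System.out.println("[" + ButtonText + "] animates a press");
    }
}

public class ButtonDemo {
    public static void main(String[] args) {
        MyButton okButton = new MyButton();
        okButton.ButtonText = "OK";
        okButton.PositionX = 500;
        okButton.PositionY = 500;

        MyButton cancelButton = new MyButton();
        cancelButton.ButtonText = "Cancel";
        cancelButton.PositionX = 600;
        cancelButton.PositionY = 500;

        // Each call uses the data in that instance's own memory,
        // so the two buttons are completely independent.
        okButton.Draw();
        cancelButton.Draw();
        okButton.Click();
    }
}
```

Because each instance has its own reserved memory, setting the text or position of one button never affects the other – which is exactly the point of separating the class from its instances.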

OOP 101 – Classes, Instances and Objects

Filed under: Development — rcomian @ 3:10 pm

For me, the most basic thing to understand when dealing with any kind of object oriented language is the relationship between classes and objects.

Let’s go through the mechanics of what’s going on from a developer’s point of view.

If you’ve written any applications in Java or C# you’ve certainly written some code that looks like this:

class MyClass {
  string MyMethod() {
    return "My string";
  }
}
It doesn’t really matter which language we’re talking about here; the idea is the same in C++, Java and C#. The exact syntax will vary, so don’t try to compile that code, but the overall idea is the same.

What we’ve done is define a class called MyClass. It contains a single method called MyMethod and that method just returns the string value "My string".

It’s worth knowing that on its own, writing, compiling and running that code won’t actually do anything at all. Not a thing. We’ve just set some stuff up.

To get it to do something you’ll need to instantiate that class. You will probably also have written something along the lines of this:

MyClass myObject = new MyClass();

Now we’ve done something a little more, we’ve taken the class we created before (MyClass) and created an object which is an instance of that class. We can refer to that object using the name myObject.
An object is an actual thing that lives in memory; it can have data associated with it and a set of methods on it which we can call. Because that data and those methods are defined by a class, we call that object an instance of that class.
myObject is an instance of MyClass, so we have access to all the things that MyClass contained. That is, a class is just a definition, an object is a thing we can actually do things with.

So what can we do with an instance of MyClass? Because MyClass contains one method: MyMethod, we can call it:

MyClass myObject = new MyClass();
string returnedString = myObject.MyMethod();

This code creates an instance of MyClass, just like before, then calls the MyMethod method on that instance.
This method returns the string "My string", which itself gets stored in the string variable returnedString, which we can then use for other things.
So this code doesn’t do much, but serves as a starting point for understanding the difference between a class and an object.
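For reference, here is one possible concrete rendering of the pseudocode above as real, compilable Java (the wrapping demo class is my own addition):

```java
// A compilable Java version of the MyClass example.
class MyClass {
    String MyMethod() {
        return "My string";
    }
}

public class ClassDemo {
    public static void main(String[] args) {
        // The class definition alone does nothing; instantiating it
        // gives us an object we can actually call methods on.
        MyClass myObject = new MyClass();
        String returnedString = myObject.MyMethod();
        System.out.println(returnedString); // prints "My string"
    }
}
```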

September 25, 2013

Incrementally calculating Mean, Variance and Standard Deviation in T-SQL

Filed under: Uncategorized — rcomian @ 4:20 pm

So, today I had a performance issue. I found a piece of SQL that inserted a lot of data into a table, one row at a time.
For each row, it inserted a random value into the table, then inserted into another table the mean and standard deviation of all the values up to that point. In T-SQL the way to do this is quite straightforward:

INSERT INTO @value_table VALUES (@value)
INSERT INTO @stats_table
SELECT SUM(val) / COUNT(val) AS mean, STDEV(val) AS stdev FROM @value_table

Of course, the problem with code like this is that calculating the statistics iterates over every existing value each time it’s called – and it’s called for every row that’s inserted.
For me, it started to creak after just a few hundred thousand rows – and my machine got very warm.

The way to solve this sort of problem is to calculate the stats incrementally – that is save the current state in variables and just add the next number in each time rather than going and recalculating the whole lot each time.

Incrementally calculating the mean is quite straightforward, we keep a @sum variable to which we add the @value and a @count variable which we increment by 1 each time.

This gives us something like the following:

INSERT INTO @value_table VALUES (@value)
SET @sum += @value
SET @count += 1
INSERT INTO @stats_table
SELECT @sum / @count AS mean, STDEV(val) AS stdev FROM @value_table

That deals with the mean, so how do we calculate the standard deviation incrementally?
Well, first of all, I had to remind myself what the standard deviation was. Thanks to the Standard Deviation and Variance page on Maths is Fun, I found that the standard deviation is just the square root of the variance.
Which is a good start. It means we can calculate the variance incrementally and then take the square root of that value to get the standard deviation.
But what’s a variance? High-school maths was a long time ago, but it is, of course, the average of the squared differences between each value and the mean.
We know the mean at each point, but things are a little tricky since the mean changes with every data point we add – we need the difference between each data point and the latest mean, which implies we need to go back and revisit every data point with the new mean each time and recalculate everything.

But there must be some hope: the new mean changes from the old mean in a measurable way, so we should be able to take advantage of that rather than recalculating everything.

Fortunately, it turns out someone who actually knows maths has already worked this out: on the Maths Stack Exchange site, someone had already asked and answered exactly that question.

Now, when I went to that link, I got a little scared. I really wasn’t sure what all those numbers actually represented, but if you read it carefully, you can parse it out.

But how do you go about coding this and checking that your calculation is correct?

Well, it took a little fiddling, but I finally managed to pull something together. So I give you: incrementally calculating the mean and standard deviation in T-SQL, checking that the values are correct by also calculating them longhand:

DECLARE @generating TABLE (val REAL)

DECLARE
@vals_Count INT = 0,
@vals_Sum REAL = 0,
@vals_Mean REAL = 0,
@vals_Variance REAL = 0,
@vals_LastMean REAL = 0,
@vals_LastVariance REAL = 0,
@nextVal REAL = 0,
@i INT = 0,
@max INT = 0

-- Define the values that we're going to insert here (example values, use whatever you like)
DECLARE @toinsert TABLE(id INT IDENTITY(1,1), w REAL)
INSERT INTO @toinsert (w) VALUES (2), (4), (4), (4), (5), (5), (7), (9)

SELECT @i = 0, @max = MAX(id) FROM @toinsert

WHILE @i < @max
BEGIN
    SET @i += 1
    SELECT @nextVal = w FROM @toinsert WHERE id = @i

    -- Add this value to our table
    INSERT INTO @generating VALUES(@nextVal)

    -- Save the last mean and variance values
    SET @vals_LastMean = @vals_Mean
    SET @vals_LastVariance = @vals_Variance

    -- Incrementally calculate the new mean
    SET @vals_Count += 1
    SET @vals_Sum += @nextVal
    SET @vals_Mean = @vals_Sum / @vals_Count

    -- Incrementally calculate the new variance (if you do this when count = 1 you
    -- get a divide by zero error, probably because the concept is meaningless)
    IF (@vals_Count > 1)
        SET @vals_Variance = (((@vals_Count-2) * @vals_LastVariance) + ((@vals_Count-1) * POWER((@vals_LastMean - @vals_Mean), 2)) + (POWER(@nextVal - @vals_Mean, 2))) / (@vals_Count-1)
    ELSE
        SET @vals_Variance = 0

    -- Select the longhand and incremental values side by side so we can check they agree
    SELECT @vals_Count AS id, @nextVal AS inserted, SUM(val) / COUNT(val) AS mean, STDEV(val) AS stdev, @vals_Mean AS incMean, SQRT(@vals_Variance) AS incStdev FROM @generating
END
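The same recurrence works outside T-SQL, of course. Here is a sketch of the incremental calculation in Java (the class and method names are my own invention), using the same sample-variance formula with the count-1 divisor that matches T-SQL’s STDEV:

```java
import java.util.List;

// Incremental mean and sample variance, using the same recurrence as the
// T-SQL script: each new value updates the running mean and variance in
// O(1) instead of rescanning all previous values.
public class IncrementalStats {
    private long count = 0;
    private double sum = 0;
    private double mean = 0;
    private double variance = 0; // sample variance of values seen so far

    public void add(double x) {
        double lastMean = mean;
        double lastVariance = variance;
        count += 1;
        sum += x;
        mean = sum / count;
        if (count > 1) {
            variance = ((count - 2) * lastVariance
                    + (count - 1) * Math.pow(lastMean - mean, 2)
                    + Math.pow(x - mean, 2)) / (count - 1);
        } else {
            variance = 0; // sample variance is undefined for a single value
        }
    }

    public double mean() { return mean; }

    public double stdev() { return Math.sqrt(variance); }

    public static void main(String[] args) {
        IncrementalStats stats = new IncrementalStats();
        for (double v : List.of(2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0)) {
            stats.add(v);
        }
        System.out.println(stats.mean() + " " + stats.stdev());
    }
}
```

A quick way to check it is to compare against the longhand result: for the eight values above the mean is 5, and the incremental standard deviation agrees with the directly computed sample standard deviation.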


August 12, 2013

Oliver Hotham – It’s Great When You’re Straight… Yeah

Filed under: Uncategorized — rcomian @ 1:32 pm

This is a mirror of a mirror.

Reposted from:
More of the story here:

< Begin >
Oliver Hotham posted the following, and is being told to take it down by Straight Pride UK. Below is the full text from his blog. ETA: This text is from a document entitled ‘Press Release.’

AUGUST 3, 2013

There has never been a better time to be gay in this country. LGBTI people will soon enjoy full marriage equality,public acceptance of homosexuality is at an all time high, and generally a consensus has developed that it’s really not that big of a deal what consenting adults do in the privacy of their bedrooms. The debate on Gay Marriage in the House of Commons was marred by a few old reactionaries, true, but generally it’s become accepted that full rights for LGBTI people is inevitable and desirable. Thank God.

But some are deeply troubled by this unfaltering march toward common decency, and they call themselves the Straight Pride movement.

Determined to raise awareness of the “heterosexual part of our society”, Straight Pride believe that a militant gay lobby has hijacked the debate on sexuality in this country, and encourage their members, among other things, to “come out” as straight, posting on their Facebook page that:

“Coming out as Straight or heterosexual in todays politically correct world is an extremely challenging experience. It is often distressing and evokes emotions of fear, relief, pride and embarrassment.”

I asked them some questions.

First of all, what prompted you to set up Straight Pride UK?

Straight Pride is a small group of heterosexual individuals who joined together after seeing the rights of people who have opposing views to homosexuality trampled over and, quite frankly, oppressed.

With the current political situation in the United Kingdom with Gay Marriage passing, everyone is being forced to accept homosexuals, and other chosen lifestyles and behaviours, no matter their opposing views. Straight Pride has seen people sued, and businesses affected, all because the homosexual community do not like people having a view or opinion that differs from theirs.

Are your beliefs linked to religion? How many of you derive your views from scripture?

Straight Pride aims are neutral and we do not follow religion, but we do support people who are oppressed for being religious. Only today, Straight Pride see that two homosexual parents are planning to sue the Church because they ‘cannot get what they want’. This is aggressive behaviour and this is the reason why people have strong objections to homosexuals.

You say that one of your goals is “to raise awareness of the heterosexual part of society”. Why do you feel this is necessary?

The Straight Pride mission is to make sure that the default setting for humanity is not forgotten and that heterosexuals are allowed to have a voice and speak out against being oppressed because of the politically correct Government.

Straight Pride feel need to raise awareness of heterosexuality, family values, morals, and traditional lifestyles and relationships.

Your website states that “Homosexuals have more rights than others”. What rights specifically do LGBTI people have that straight people are denied?

Homosexuals do currently have more rights than heterosexuals, their rights can trump those of others, religious or not. Heterosexuals cannot speak out against homosexuals, but homosexuals are free to call people bigots who don’t agree with homosexuality, heterosexuals, religious or not, cannot refuse to serve or accommodate homosexuals, if they do, they face being sued, this has already happened.

Straight Pride believe anyone should be able to refuse service and speak out against something they do not like or support.

There is a hotel in the south of England, called Hamilton Hall which only accepts homosexuals – if this is allowed, then hotels should have the choice and right to who they accommodate.

What has been the response to your campaign?

The response to Straight Pride’s formation has been as expected; hostile, threatening, and aggressive. Homosexuals do not like anyone challenging them or their behaviour.

We have had support from many people saying that if homosexuals can have a Pride March, and then equality should allow Heterosexuals to have one too. After all, the homosexual movement want everyone to have equality.

Why would you say that heterosexuality the “natural orientation”?

Heterosexuality is the default setting for the human race, this is what creates life, if everyone made the decision to be homosexual, life would stop. People are radicalised to become homosexual, it is promoted to be ‘okay’ and right by the many groups that have sprung up.

Marriage is a man and a woman, homosexuals had Civil Partnerships, which was identical to Marriage with all the same rights, they wanted to destroy Marriage and have successfully done so.

If you could pick one historical figure to be the symbol of straight pride (just as figures like Alan Turing, Judith Butler or Peter Tatchell would be for Gay Pride) which would you choose?

Straight Pride would praise Margaret Thatcher for her stance on Section 28, which meant that children were not taught about homosexuality, as this should not on the curriculum.

More recently, Straight Pride admire President Vladimir Putin of Russia for his stance and support of his country’s traditional values.

How do you react to anti-gay attacks and movements in Russia and parts of Africa?

Straight Pride support what Russia and Africa is doing, these country have morals and are listening to their majorities. These countries are not ‘anti-gay’ – that is a term always used by the Homosexual Agenda to play the victim and suppress opinions and views of those against it.

These countries have passed laws, these laws are to be respected and no other country should interfere with another country’s laws or legislation.

We have country wide events which our members attend, and ask people their opinions and views, on such event at Glastonbury this year was very positive with the majority of people we asked, replied they were happily heterosexual.

For the record, Straight Pride did not respond to these questions:

“Pride” movements such as Gay Pride and Black Pride were making the argument that the stigma against them meant that proclaiming their “pride” was an act of liberation from oppression. Can being heterosexually really compare?

A problem that Gay rights activists cite is the issue of bullying, and the effect this can have on young LGBT people. Do you think a similar problem exists with straight children being bullied by gay children?

I will obviously add to this if they do respond.

You can follow Straight Pride on Twitter here and see their Facebook page here.

April 1, 2013

Everything you need to know about Bitcoin

Filed under: Uncategorized — rcomian @ 5:21 pm

After looking in to bitcoin over the last few months I’ve decided to put together everything I’ve found in a blog post.
I’ve tried to make this post largely non-technical, more of a user guide than a technical guide. Its job is to introduce the concepts that are useful; then if you want technical detail about specific parts, it will be easy to find.


Bitcoin is designed to be a currency – a form of money that you can use to pay for things. It sits alongside other currencies like the US Dollar, the Euro, the Pound Sterling and the others. And just like you can buy Euros with Dollars and vice versa, you can buy Bitcoins with Dollars – and vice versa.

Bitcoin is unlike other currencies in one crucial way – it’s not controlled by any government. In fact no-one is in control of Bitcoin. It’s managed entirely by a peer-to-peer Network which anyone can join just by running a simple program. If the Network agrees that you have Bitcoins, then you have Bitcoins. What’s clever about this is that it is done in a way which is secure even though it’s distributed. It uses very high strength cryptography in innovative ways and it’s unbreakable with today’s technologies. So it’s secure even though there are no central banks which keep track of how much everyone has.

The currency abbreviation for Bitcoin is BTC. So the currencies listed above would be USD, EUR, GBP & BTC.

Bitcoin infrastructure

Bitcoin uses numbered accounts, much like those of the famous Swiss banks. An account has a number (called an Address), a secret code (called a Private Key) and a value – the number of Bitcoins it contains.

You can have someone send Bitcoins to your account just by giving them its Address, but you need the Private Key to send those Bitcoins to someone else’s account. The Private Key is super secret, you never give it to anyone – if you did, they could use it to send your Bitcoins anywhere they liked! Every Address has its own Private Key.

An Address is a long, complex number, and Private Keys are even longer. For example, an Address with its Private Key might look like this: 31uEbMgunupShBVTewXjtqbBv5MndwfXhb, 5Kb8kLf9zgWQnogidDA76MzPL6TsZZY36hWXMssSzNydYXYB9KF.

You’re not expected to type or remember these numbers. When you use Bitcoin, a program on your computer or phone remembers all these numbers for you, collecting a group of Addresses with their Private Keys into a Wallet.

Bitcoin Addresses are different from Swiss bank accounts in that you don’t need to register them with anyone. Whilst a Swiss bank might give you a number to use, you generate Bitcoin Addresses yourself. They’re basically random numbers and you can make as many as you like – millions or billions if that’s what you want – and no-one will care if you do as there’s no cost at all to anyone else.

A Transaction is when Bitcoins are moved between Addresses. The Network keeps track of every Transaction that has ever happened – and if no-one has transferred Bitcoins in to an Address, then that Address has no Bitcoins in it. So whilst you can create as many Addresses as you like, they will all be empty until someone moves some Bitcoins in to them. The Network only knows about Addresses that have had some Bitcoins sent to them.

Because Addresses are basically free, it’s normal to use Addresses just once and forget them once they’re empty. For example, if you shop online and pay with Bitcoins, it’s normal for the website to create a new Address just for you to send Bitcoins to. Once you’ve paid, that Address will never be used again. If you change your mind and don’t pay, that Address will just be forgotten.

Getting Bitcoins

So if you’re starting out with Bitcoins, how do you get them? This is the same problem as with any currency – imagine you were suddenly interested in Japanese Yen – how would you get some? You either have to accept them in payment for something, or you buy them through an exchange.

At the moment it is difficult to buy Bitcoins. This is because normal online payments, such as by Visa or PayPal, can be reversed by the buyer claiming not to have received the goods. Because Bitcoin transfers aren’t reversible, none of the big players are willing to accept the normal online payment methods for Bitcoins (and those that did have gone bust because of this fraud).

One method I have found is using VirWox. If you want to move small amounts, you can deposit 50 Euros (or the equivalent in USD or GBP). The complication is that the only way to buy BTC there is with Linden Dollars (SLL), so you have to deposit in your currency, buy SLL and then buy BTC. This makes the purchase price very high, but it is one of the few reliable ways to purchase Bitcoins with a standard online payment. If you have an avatar in SecondLife you can buy SLL using the SecondLife website, then transfer into VirWox and buy BTC that way – that route might allow you to move more money, at a price.

How much are Bitcoins worth?

This is a tricky question, Bitcoins have no intrinsic value – they don’t represent an amount of gold or the promise of some work done. They’re also not linked to any other currency, so their price doesn’t go up or down with any other currency. Bitcoins are worth exactly what people will pay for them. Fortunately there is a thriving market for Bitcoins and their value is measurable. At the time of writing you could buy 1 Bitcoin for about 62 GBP, 73 EUR or 94 USD. However, that value changes a lot, quickly – just one month ago that price would have been about 20 GBP. The Bitcoin market is relatively small and unknown which makes it volatile, as time goes by it should become more stable, but at the moment it’s extremely unpredictable.

Spending partial Bitcoins

A single Bitcoin is quite a large currency unit. If a Bitcoin is worth 94 USD, it’s pretty much a hundred dollar bill. If the minimum we could give anyone was a hundred dollar bill it would make buying small things like newspapers very difficult. Fortunately, a Bitcoin is just a number associated with an Address. This means that it doesn’t have to be a whole number, we could have 0.5 Bitcoins, or even 0.001 Bitcoins. Currently, Bitcoins are allowed to be split down to 8 decimal places. That means the smallest amount we can transfer is 0.00000001 Bitcoins – which is worth way less than a penny in any currency at the moment.

Incidentally, 0.00000001 Bitcoins is called a Satoshi, after the person who originally invented the system.
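As an aside on the arithmetic: because the smallest unit is fixed at 0.00000001 Bitcoins, software typically stores amounts as whole numbers of satoshis rather than as floating point, which would introduce rounding errors when amounts are split and summed. A quick sketch of the conversion (my own illustration, not any real client’s API):

```java
import java.math.BigDecimal;

// Sketch: converting a BTC amount (given as a decimal string) into a whole
// number of satoshis, where 1 BTC = 100,000,000 satoshis.
public class Satoshi {
    public static final long SATOSHIS_PER_BTC = 100_000_000L;

    public static long btcToSatoshi(String btc) {
        // BigDecimal parses the decimal string exactly; longValueExact
        // throws if the amount is finer than one satoshi.
        return new BigDecimal(btc)
                .multiply(BigDecimal.valueOf(SATOSHIS_PER_BTC))
                .longValueExact();
    }

    public static void main(String[] args) {
        System.out.println(btcToSatoshi("0.00000001")); // 1 (one satoshi)
        System.out.println(btcToSatoshi("2.5"));        // 250000000
    }
}
```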

Anonymity and tracking Bitcoins

Remember that an Address is just a random number and doesn’t need to be registered with any central authority. This means that knowing an Address doesn’t tell you anything about who owns it. In fact, there’s nowhere to register your name and personal details within Bitcoin even if you wanted to – it just doesn’t exist. This makes Bitcoin anonymous, very much like paper money.

There is something to be aware of, however: every single transaction ever made is recorded publicly. This means that if someone can trace an Address to you specifically, they can look up every Transaction and see where all the Bitcoins for that Address came from and went to. There is no way to hide that a Bitcoin came from this Address and went to that Address. Transactions are public knowledge and there’s no way to fudge the history after the fact – it’s not just a ledger; changing the historic records would make all the current records invalid and would be detected and fixed immediately by the Network. This is all part of how Bitcoin keeps its integrity.

In order to maintain as much privacy as possible, the recommendation is to never use an Address more than once if you can avoid it. Most clients have features to help with this, for example, if you have an Address with 5 Bitcoins in it, and you send 2 Bitcoins to someone, the client will create a third Address, called a Change Address and move the remaining 3 Bitcoins to it as part of the Transaction. This way, no-one knows if it was 2 or 3 Bitcoins that got transferred to someone else. If you then pay someone else 1 Bitcoin, it’s coming from that new Address, so no pattern can be built up.

Sometimes you have to re-use Addresses. Donations are one example: some people have a line in their email or forum signature that says “Donations welcome here: (Address)”. Anyone can see how much has been donated that way and now there’s a link from that Address to that person. It’s not the end of the world, it’s just something to be aware of.

Good privacy hygiene helps the whole Network. If everyone’s private, then it’s very hard for anything to be tracked. If privacy is ignored in some quarters, it reduces the privacy for everyone, as it creates known points that Bitcoins can be traced through. We’re not just talking about privacy from the government here, we’re talking about privacy from anyone: the Russian mafia, people traffickers, abusive spouses, stalkers – they all have the same access to the public record as anyone else, and are possibly more interested in tracking where certain Bitcoins go than any investigator. Keeping your own privacy helps everyone.

The mechanics of sending Bitcoins

Bitcoins are sent from Address to Address in Transactions. A Transaction has 2 sides, From and To, and you can have any number of Addresses on each side. After using Bitcoin for a while, you might find that you have a dozen Addresses, each with some small number of Bitcoins. An example Transaction looks like this:

From:
1Hq53US8QKFnWiUzbDKFWmb4rsGf5Zkacb: 10
17ahwGCn396bVv8WsBKXvGMJUZ9DeiAHa2: 0.02

To:
1KhbSE6ehE7dP6jHrgaGsssVkXkGHLS6Gq: 1.02
1CYq1y7KFbWdDZVMSyy7RkUQhwEurT1hvc: 9

In this case, you can see 10 Bitcoins have been moved out of the first Address and 0.02 Bitcoins out of the second. Normally this would leave the first 2 Addresses empty, but that doesn’t have to be the case.
From those Bitcoins, 1.02 are moved into the 3rd Address and 9 into the last. See how those Addresses tell you nothing about who owns them? Also, was this a payment of 9 Bitcoins, or 1.02 Bitcoins? Does the person paying have 9 Bitcoins remaining in their wallet, or 1.02? From this Transaction alone, it’s impossible to tell.
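Part of what every Client checks is that a Transaction never creates Bitcoins out of thin air: the To side can never exceed the From side. A sketch of that balance check, using the example figures above (amounts held in Satoshis, i.e. integers, to avoid floating-point trouble):

```javascript
// The From and To sides of the example Transaction, in Satoshis.
const from = [
  { address: '1Hq53US8QKFnWiUzbDKFWmb4rsGf5Zkacb', satoshis: 1000000000 }, // 10
  { address: '17ahwGCn396bVv8WsBKXvGMJUZ9DeiAHa2', satoshis: 2000000 },    // 0.02
];
const to = [
  { address: '1KhbSE6ehE7dP6jHrgaGsssVkXkGHLS6Gq', satoshis: 102000000 },  // 1.02
  { address: '1CYq1y7KFbWdDZVMSyy7RkUQhwEurT1hvc', satoshis: 900000000 },  // 9
];

const sum = (side) => side.reduce((total, entry) => total + entry.satoshis, 0);

// Outputs must never exceed inputs; any difference is left as a Fee.
const fee = sum(from) - sum(to);
console.log(fee >= 0); // true - this example balances exactly, so the Fee is 0
```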

So what happens when you decide to send someone some Bitcoins? First, you need an Address to send to. This is straightforward: they should tell you one as part of any transaction. Next, you normally just tell your Bitcoin client, which is a program running on your computer or phone, how much you want to pay and what Address to pay to. Your client will then automatically select some Addresses with enough Bitcoins in them, create a Change Address and create a Transaction like the one shown above.
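That selection step can be sketched as a few lines of code. This is a deliberately naive version of what a Client does – real coin selection is smarter, and the Address strings here are made up for illustration:

```javascript
// Pick wallet entries until we've gathered enough Satoshis, then send
// the remainder to a fresh Change Address.
function buildTransaction(wallet, payTo, amount) {
  const from = [];
  let gathered = 0;
  for (const entry of wallet) {        // naive coin selection
    if (gathered >= amount) break;
    from.push(entry);
    gathered += entry.satoshis;
  }
  if (gathered < amount) throw new Error('Insufficient Bitcoins');

  const to = [{ address: payTo, satoshis: amount }];
  const change = gathered - amount;
  if (change > 0) {
    // A real Client generates a brand new key pair here.
    to.push({ address: 'new-change-address', satoshis: change });
  }
  return { from, to };
}

const wallet = [{ address: 'addr-1', satoshis: 500000000 }];     // 5 Bitcoins
const tx = buildTransaction(wallet, 'their-address', 200000000); // pay 2
console.log(tx.to[1].satoshis); // 300000000 - 3 Bitcoins of change
```

From the outside, no-one can tell which output is the payment and which is the change – exactly the ambiguity described above.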

Your Client will then validate that everything is correct – that you really do have those Bitcoins in those Addresses and that the Addresses you’re sending to are valid Bitcoin Addresses. It will then use the Private Keys it kept with each Address to mark the Transaction with proof that you are allowed to send money from those Addresses, and send the Transaction into the Network.

Every Bitcoin Client is part of the peer-to-peer Network. Usually, you will be connected to 8 or so other Bitcoin Clients. When you send a Transaction, you send it to each of those 8 other Clients. Those Clients could be anyone – just people like you who use Bitcoin. Their Clients will then validate the Transaction themselves, check that the Private Key marks are legitimate (this is a mathematical operation and doesn’t require looking anything up) and then check that the From Addresses really did have enough Bitcoins in them to make the Transaction. This takes fractions of a second. If the Clients are happy that the Transaction is legitimate, they’ll send it on to everyone connected to them, who will also validate it. So you send to 8 other Clients, they each send to 8 other Clients, and so on.
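The fan-out is why propagation is so fast. In an idealised model where every Client relays to 8 fresh peers, the number of Clients reached grows as 8 to the power of the hop count – real gossip overlaps heavily, so treat this as an upper-bound intuition only:

```javascript
// Idealised flooding: how many hops to reach a given number of
// Clients if each one relays to `fanout` new peers?
function hopsToReach(clients, fanout = 8) {
  return Math.ceil(Math.log(clients) / Math.log(fanout));
}

console.log(hopsToReach(1000000)); // 7 hops to cover a million Clients
```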

In this way, your Transaction will spread across the entire Bitcoin Network in a matter of seconds. The person you’re sending Bitcoins to should see your Transaction arrive in their Client very quickly. If this is a quick, low value transaction then this might be enough to say that all is well.

Because the Transaction is validated by every Client it goes through, if you hack your own Client to send out an invalid Transaction, it will simply be dropped by all the other Clients because it’s invalid. This makes the Network very resilient to hacking – you’d have to own the majority of Clients to stand a chance of your invalid Transaction being accepted. Since Clients run on every platform out there and have multiple implementations, the chances of any one hacker being able to do that are very low.

But there are still some theoretical attacks that can be performed, so if you’re paranoid (and you should be just as paranoid as you are about fake bank notes) you can wait for the next phase – Confirmation.

Confirming Transactions

Confirmation happens when your Transaction gets included in to the Block Chain and your peers all agree that the Block Chain is valid.

The Block Chain is the public record of every Transaction. Special clients, called Miners, put every Transaction in to the Block Chain. Once a Transaction is broadcast in to the Network, it normally takes about 10 Minutes or so to start getting Confirmations that the Transaction is valid, but it can happen more quickly or much more slowly depending on how busy the Network is, how much you paid in Fees and luck.

Fees and Getting your Transaction Confirmed Quickly

It’s worth noting that it’s entirely up to the Miners which Transactions they include in the Block Chain. Since the Network is peer-to-peer, there’s no guarantee that every Miner will see every Transaction, so you can’t say the Block Chain is invalid just because it doesn’t contain a certain Transaction. It’s perfectly valid for parts of the Block Chain to be completely empty – and that does happen.

So why should a Miner spend their valuable computing resources to include your Transaction in the Block Chain? One thing you can do is sweeten the deal. Every Transaction includes a special field – a Fee. When a Miner puts your Transaction in the Block Chain and gets it accepted by the Network, they’re allowed to collect the Fees on the Transactions they added. This is one way that Miners make money. For a normal Transaction, an average Fee is between 0.0005 and 0.001 Bitcoins, but if your Transaction includes a large number of Addresses, the Fee should be higher, since the Miner has to do more work validating each Address. Your Client software will suggest a Fee, but you can pay as much or as little as you want, including nothing at all. With a market-rate Fee, Transactions normally get included in the Block Chain in 10 Minutes or so. If you don’t include a Fee, or include one below the market rate, it can take a very long time before anyone gets around to including your Transaction – in practice a few hours at the moment, sometimes a couple of days, and there are no guarantees. As Bitcoin grows in popularity, that could get much worse.

How the Block Chain Works

Bitcoin keeps a public record of every Transaction that has ever occurred. It does this by collecting a bunch of Transactions together and combining them into a Block. At any moment in time, several thousand special Clients, called Miners, are competing to create the next Block. When you send your Transaction out into the Network, these Miners will grab your Transaction and include it in the next Block they’re trying to make (if it’s valid). Once someone successfully makes a Block, it’s transmitted to all the Clients just like your Transaction was. Just like your Transaction, each Client confirms that all the parts of the Block are valid and that all the Transactions are valid and, if so, passes the Block on to its peers. Each Block is connected to the Block before it, forming a chain of Blocks going back through history and containing every Transaction that has ever occurred. This Block Chain forms the public record.

Anyone can run a Miner. The Miners all compete to make the next Block and it’s designed to be very hard to do so, so you need very powerful computers to be an effective Miner. The difficulty is tuned so that on average, only one block is made in the entire Network every 10 minutes. This is why getting your Transaction Confirmed takes about 10 minutes or so.

Mining is a difficult process and you don’t need to be a Miner to use Bitcoin. Bitcoin does, however, rely on Miners to create its Block Chain, and has an interesting method of paying Miners for their work.

Bitcoin relies on its Block Chain. There is only one Block Chain: one list of Blocks in sequence. Each Block contains information from the previous Block, so if anyone tries to change a Block back in the Chain, even if the change itself is valid, the following Blocks would no longer be valid (since they would contain information from the original Block). To make a change to history you would have to re-create every Block from the one you changed up to the present day. This is generally considered difficult enough to be impossible. Moreover, Clients themselves contain the signature of a recent Block in their source code, so even if someone did recreate the entire history, the Clients wouldn’t accept any changes from before their built-in Block anyway.

Creating a single Chain using multiple distributed Miners is a difficult process, but mechanisms are built in to the Network to ensure that it goes smoothly. First, it’s made so difficult to produce a Block that it’s unlikely that two valid Blocks will be produced at the same time. Secondly, mechanisms are built in so that if two or more competing valid Blocks are produced, the network votes on one which becomes the official Block and the other one is forgotten.

There are several websites that let you examine the Block Chain for yourself. One popular site is

Why run a Bitcoin Miner?

If Mining is so crucial to the Network, yet so difficult, how do we make sure that people can be bothered running the Mining software? After all, it takes effort to set it up, it also ties up your computer and it takes electricity. Yet some people spend thousands of dollars buying custom built machines just to run Miners on. Is this all out of the goodness of their hearts?

To some extent, yes: one reason to run a Miner is simply because you like the idea of Bitcoin and want to support it. But you can do that just by running a Client that forms part of the Network and does the validation.

The best way to ensure that people do things is to pay them, and Bitcoin has 2 mechanisms for this. One we’ve already seen: collecting the Fees from each Transaction. This is a continuous income and will never expire.

The other way is to take a bounty. Whenever a Miner creates a Block, the first Transaction they put in is a free credit to the Address of their choosing. This is the only kind of Transaction in the system that doesn’t move Bitcoins from one place to another – this is the source of brand new Bitcoins. All the clients are coded to agree that the first Transaction should be this free credit, so it’s considered a valid Block. Since every Miner will be trying to credit their own Address, it means that no two Miners will ever produce exactly the same Block at exactly the same time.

The amount that a Miner is allowed to credit themselves goes down over time. For the first 4 years, it was 50 Bitcoins per Block. This halves every 4 years (every Client knows this), and at the moment the bounty is 25 Bitcoins per Block. This puts an upper limit on the total number of Bitcoins that will ever be in the system: something like 21 million Bitcoins.
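You can work out that 21 million figure by summing the bounty schedule. This sketch assumes the halving happens every 210,000 Blocks (roughly 4 years at one Block per 10 minutes), with sub-Satoshi amounts rounded down:

```javascript
// Sum the bounty schedule: 50 Bitcoins per Block, halving each era.
const BLOCKS_PER_ERA = 210000;
let subsidy = 50 * 100000000; // current era's bounty, in Satoshis
let totalSatoshis = 0;

while (subsidy > 0) {
  totalSatoshis += subsidy * BLOCKS_PER_ERA;
  subsidy = Math.floor(subsidy / 2); // the halving
}

console.log(totalSatoshis / 100000000); // just under 21 million Bitcoins
```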

Once all the Bitcoins have been mined, no more will ever be produced. At that point, it’s expected that people will be Mining for the Fees only. It’s quite clever: by reducing the bounty gradually over time, it weans Miners off expecting the bounty and on to expecting the Fees. If Bitcoin becomes phenomenally successful, the Fees alone should be quite valuable (looking at the Blocks as I write this, the Fees alone are often worth between $15 and $30).

You should be able to see why people bother investing in building Mining machines. At 25 Bitcoins a Block, winning a Block is worth around 2500 USD at today’s prices. The general expectation is that the value of Bitcoins will go up, and it’s definitely true that the difficulty of Mining will go up, so some people think it’s worth Mining whatever they can right now.

Ensuring security for Transactions

How exactly do we ensure that a request to move Bitcoins from one Address to another is legitimate? It’s an extremely important question. And it’s impressive to learn that no-one has ever stolen Bitcoins by breaking the system itself. People have had Bitcoins stolen, but only by attackers breaking into computers and stealing the Private Keys – much as credit card numbers are stolen through keyloggers and the like.

So Bitcoin is considered secure, but how?

It all hinges on Public Key Cryptography. If you don’t know anything about Public Key Cryptography and Hashes, this section won’t make much sense. However, it’s a critical part of modern computer security, so I suggest you learn at least the basics on how it works.

Bitcoin uses Elliptic Curve Cryptography (ECC), which has no known practical attacks against it and can use much smaller key sizes than DSA or RSA. ECC routinely uses 256-bit private keys, which are considered secure. The key size matters because all Transactions are stored forever, so using a key a quarter of the size requires much less storage over time. ECC may also be faster than RSA or DSA for the same cryptographic strength, although this is less clear.

The Public and Private Keys are generated together using the standard algorithms for ECC. The Address is essentially a hash of the Public Key, constructed by using a couple of different hashes and combining them. The result is that you can validate an Address easily, and it’s incredibly unlikely that mistyping an Address will produce another valid Address. Since the Address is a hash of the Public Key, you can always tell whether a Public Key matches a given Address by deriving the Address from the Public Key again and checking that they match.

ECC allows you to sign a message with the Private Key and validate that signature with the corresponding Public Key. So when a Transaction is created by your Client, it signs the Transaction with the Private Key and adds the Public Key so that anyone can validate the signature. When we receive a valid Transaction, this is what we know:

  • We know the Address, but cannot go from the Address to either the Public Key or Private Key.
  • We know the Public Key, we can validate that the Address matches the Public Key, but we cannot get the Private Key from the Public Key.
  • We know the Transaction has been signed with the Private Key. We can validate that signature with the Public Key and strongly infer that the Private Key is known by the person who created the Transaction.

From this, we can tell without consulting any registry whether the person who made the Transaction has the Private Keys that correspond to the Addresses the Bitcoins are being moved from. We can tell from the Block Chain whether those Addresses had enough Bitcoins in them at that point in time, and we can tell that the Addresses the Bitcoins are being sent to are valid and haven’t been mistyped.

The other side of the guarantee is to ensure that no more Bitcoins come out of an Address than went in – spending the same Bitcoins twice is called double spending. In everyday usage, we prevent this by using physical tokens: we hand over a 10 dollar note to the store clerk, for example, and we can’t then go to the next store and hand over the same 10 dollar note. Online, our bank keeps tight track of how much money we have in our account, and every transaction reduces the amount available immediately.

In Bitcoin, the Block Chain is the gold standard, but there’s a gap between when we submit a Transaction and when it gets incorporated into the Block Chain. This means it’s conceivable for someone to send one Transaction into one side of the Network and another Transaction into the other side at the same time. Eventually, both Transactions will spread throughout the whole Network, but it could take a few seconds, or even some minutes. Consider what happens when a Miner tries to add these Transactions to a Block.

First, a Miner may only have one of the Transactions, and that one could make it immediately into a Block, if luck is working that way. Then, when any Miner receives the other Transaction, they notice that there aren’t enough Bitcoins in the Address and simply reject it. That Transaction will never make it into the Block Chain. If any Client or Miner receives both Transactions before either makes it into a Block, then it’s ultimately up to them whether they accept one of the Transactions and drop the other, or drop both.

This double-spend attack is only a problem if you’ve accepted a payment that ultimately doesn’t make it into the Block Chain. However, it’s extremely difficult to exploit in any meaningful way: if you receive a payment notification and it isn’t quickly invalidated, it’s likely to be good. In practice the first Transaction is likely to make it into the Block Chain, and if you receive the second Transaction, your Client is likely to have already received the first and can tell that there aren’t enough Bitcoins available.

However, the only way to be really secure is to ensure that the Transaction has made it into the Block Chain and been accepted by the rest of the Network. Your Client will let you know when that has happened, because the Transaction will be marked either as Unconfirmed or as having some number of Confirmations.

Making Blocks hard to create

One of the last pieces of genius in the Bitcoin Network is how Blocks are made – more particularly, how the Network is tuned so that Blocks are hard to make. This matters: if hundreds of Blocks were made every second, sorting out the order of the Blocks would become an impossible problem. By slowing Block creation down, the Network only ever has to choose one Block from a small list, and even if the final choice is somewhat arbitrary, there will be very few conflicts where this is a problem.

So what’s the secret? It’s that each Block has to be hashed to match a specific value. What each Miner does is create a Block header which includes the hash of the previous Block, the hash of the Transactions in the current block and an arbitrary value called a Nonce. This header is then hashed and this becomes the hash used in the next Block’s header. The trick is that the hash is only valid if it conforms to a certain pattern. Specifically, it must start with some number of 0’s. If the header doesn’t hash to that value, the Miner increments the Nonce and tries again. It keeps doing this until it finds a Nonce that causes the header to hash to a valid pattern.

The hash is a SHA-256, so there are 256 bits to play with. If we require just the first bit to be 0, then 1 in every 2 Nonces will create a valid hash. If we require 2 zero bits, that goes down to 1 in every 4 Nonces. If we require 10 zero bits, we would have to try about 1,000 Nonces on average (2^10 = 1,024) before finding a match.

We can keep requiring more bits until the Miners must hash the header a very large number of times before it finds a valid block. As more Miners join and more powerful machines are created, the number of zero bits required increases, making it more and more difficult to find a valid Block. Every Client can work out how many zero bits are required just by looking at the Block Chain, so if a Miner tries to cheat, the Block is simply rejected by the other Clients in the Network without any central authority saying that it’s bad. The difficulty is tuned so that the whole Network only produces a valid Block every 10 minutes.

As an example, the valid Nonce for a recent Block as I was writing this was 3,544,225,952 – so it likely took 3.5 billion attempts to find a valid Nonce for that Block. I say likely, because there’s no requirement that a Miner try Nonces sequentially; it could just try random numbers. But it should be clear that the system can make it very difficult if it needs to. This is why being a Bitcoin Miner needs a lot of computing power – you need to be able to create a lot of hashes in order to stand a chance of finding a Block.

So if every Miner is just incrementing a value until they find a valid Block, why don’t the fastest computers all find the valid Nonce at the same time? And how can anyone except the fastest computers compete?

First of all, not all Miners will include the same Transactions in the same order. Second, each Block contains a unique Transaction – the free credit that every Miner gives themselves. Since each Miner credits their own Address, each Block is by definition unique, which means each Miner is looking for a different Nonce to make their Block valid. So, with luck, even the slowest computer could find a valid Block. In general it’s statistical: you will win Blocks in the ratio of your computer’s hash rate to the Network’s hash rate. This makes it a race to create more powerful Mining hardware, which is overall a good thing for Bitcoin, as it makes it very hard to hijack the Block Chain.

Pooling Mining resources

If you look at the hash rates of the Network, things might look bleak. For example, say the Network can do 60 trillion hashes a second. If your machine can do 500 million hashes a second, you’d find a valid Block once every 2.5 years or so. However, at current Bitcoin prices you’d still be earning, on average, £2 a day. If only there was some way to earn that £2 a day without having to wait (on average) 2.5 years for each payout…

Fortunately, there is a solution. If you join a Mining Pool, you can combine your computing power with hundreds of others, all trying to find the correct Nonce for the same Block. If someone in the Pool finds a valid Block, the Block earnings are distributed to everyone who contributed, pro rata. The exact details of the payout differ from Pool to Pool, but in general you earn between 50% and 100% of what you would have earned Mining on your own – and instead of getting a large payout every few years, you get a small payout every few days. How often you get paid depends on the combined hash rate of everyone in the Pool. Most Pools are large enough to win a Block at least every few weeks; some are large enough to win several Blocks a day.
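The expected-value arithmetic behind that “£2 a day” figure is straightforward. The rates and prices below are the ones quoted in this post, not live numbers:

```javascript
const myHashRate = 500e6;      // 500 million hashes/second
const networkHashRate = 60e12; // 60 trillion hashes/second
const blocksPerDay = 24 * 6;   // one Block every 10 minutes
const bountyPerBlock = 25;     // Bitcoins
const gbpPerBitcoin = 62;      // price quoted earlier in this post

// Your share of Blocks is your share of the Network's hash rate.
const bitcoinsPerDay =
  (myHashRate / networkHashRate) * blocksPerDay * bountyPerBlock;

console.log((bitcoinsPerDay * gbpPerBitcoin).toFixed(2)); // "1.86" - about £2
```

A Pool doesn’t change this expected value (minus its cut); it only smooths out the variance.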

April 12, 2012

JSLint with msbuild

Filed under: Development — Tags: , , , , — rcomian @ 10:17 am

Recently, I’ve been working with Javascript for a project at work and want to get all the files linted with JSLint.

Since we’re still a Microsoft shop (how I hate that phrase) and intend to build things with msbuild, it makes sense to get this working under msbuild.

Now so far, I’d had jslint running from the windows version of node from the commandline. Works like a demon, no problems. I’ve also had lint working from a makefile. So plugging this into msbuild… how hard could it be?

Well, msbuild includes the wonderful “Exec” task, for running programs during a build. So I thought I’d stick with what I knew and run jslint from there.

Now on the commandline, you can run jslint and it picks it up from the path (and runs it under node automatically). Not so with msbuild. Of course, you can’t reference the jslint.js file directly either, since it’s not an executable. So I had to go with “node ../jslint.js — files to lint”.

No problem, this is fine. And it works too. When the files all lint properly, it’s exactly as you’d want. When there’s an error, however, you just get “error”. The build fails, but the error messages from jslint are suppressed. I think you’d call this “minimal functionality”.

So how to get the error messages visible? First I tried building with /v:diag, just to make sure that they weren’t there at all – they weren’t. Second I tried redirecting stderr to stdout, as I’d read somewhere that the Exec task doesn’t capture stderr. I used various incantations of “2>&1” and “1<&2”, depending on who was writing the article and whether they were thinking unix or dos.

Still no dice.

Next I’m starting to think custom actions. Now, I’ve been playing in Javascript for a while now, and suddenly I’m looking at C# and thinking – seriously? I’ve got to write the code, then compile it myself, then manually take care of the resulting .dll file and use that? Compiled code can be a real drag. I know, I’ve spent the last 10 years working with it.

Fortunately, msbuild now contains “inline tasks”, which fit the bill quite nicely. I can now write a quick little task and reference it.
So I did, just a simple process exec set up to run node with the appropriate command line. It all worked fine too. In exactly the same way as the “Exec” task.

Even though I was manually reading both streams and logging both outputs, jslint just doesn’t print the error logs when running from msbuild. It does print the name of the file that fails, so it’s getting output, but that’s all.

Now this isn’t 100%, I have seen the correct failing output on some occasions, even from the Exec task. So it’s something screwy going on with node and jslint. But we’re a microsoft shop, we don’t even use node, so it’s not really worth my time debugging this too much.

I started looking around for alternatives. I did find the Sedo Dream collection of tasks, which includes a jslint build task, but I really want a “check this out, build it” workflow, and installing a 3rd party msi just doesn’t cut it. Sometimes there’s no alternative, but I don’t want to add something unless it’s really necessary. There’s no equivalent zip file, so I’d have to repackage it myself for distribution, and it looks like quite a large library of “yet another collection of generic msbuild tasks” – no doubt wonderful, but we’ve got a lot already.

Finally I came across JSLint for WSH. This looked promising, since it was a single javascript file running in the normal windows environment. It was great.
One of the things I look for with this kind of wrapper is “how do you update it”? I noticed that the last checkin of the jslint part of the package was from August 2011. That’s a little out of date, I know old Crockford updates more often than that.

Looking at the source, I realised that it was simply the core of JSLint with a small executor and reporter tagged onto the end. I pasted the latest version of JSLint into the top of the file and it worked fine, so I’m quite happy that it’s easy enough to keep it up to date, even if the package authors haven’t felt the need to.

But it still wasn’t quite right. First off, it completely balked at my actual javascript files. It turns out they’re unicode with byte order marks (BOM), which JSLint was trying to read as javascript. The node version of jslint worked fine, so I looked at what it was doing – and it was as simple as checking the first three characters, seeing if they were the BOM and removing them if they were.
Pasting this code into jslint-for-wsh didn’t work immediately. The original code was using a compare like this:

if (0xEF === content.charAt(0) ...

Whilst it works fine on node, I don’t blame cscript for having trouble with this. Changing it to get the actual character code (i.e. a number) to compare against a number works fine:

if (0xEF === content.charCodeAt(0) ...

Finally, I found that all my line numbers were off. After a little head scratching, I realised that it wasn’t counting blank lines. It turns out that cscript’s split function ignores blank lines, meaning that JSLint has no way of keeping track of where it actually is in the code.

The way around this was to do a little more work and build up the array of strings as we read the file, rather than reading the file as one big text blob and making jslint split it up. It’s a simple while loop with ReadLine rather than just ReadAllText.
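Both fixes, sketched as plain functions. The real code lives inside jslint-for-wsh.js and reads files via WSH, but the logic is the same: strip the UTF-8 BOM by checking character *codes*, not characters, and split into lines ourselves so blank lines are preserved:

```javascript
// Strip a UTF-8 byte order mark (bytes 0xEF 0xBB 0xBF, as WSH reads
// them) by comparing character codes to numbers.
function stripBom(raw) {
  if (raw.charCodeAt(0) === 0xEF &&
      raw.charCodeAt(1) === 0xBB &&
      raw.charCodeAt(2) === 0xBF) {
    return raw.slice(3);
  }
  return raw;
}

// Split into lines, keeping blank lines so reported line numbers
// stay accurate.
function toLines(source) {
  return source.split(/\r\n|\r|\n/);
}

const raw = '\u00EF\u00BB\u00BFvar a;\n\nvar b;'; // bytes as read by WSH
const lines = toLines(stripBom(raw));
console.log(lines.length); // 3 - the blank line is preserved
```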

To run all this, a simple Exec task works fine:

<Target Name="jslint" Condition="'$(BuildCmd)'!='Clean'">
  <ItemGroup>
    <!-- Define the files to lint -->
    <FilesToLint Include="*.js" />
  </ItemGroup>

  <Exec Command="cscript //B //NoLogo //T:10 &quot;..\JSLint\jslint-for-wsh.js&quot; &quot;%(FilesToLint.FullPath)&quot;" />
</Target>

And now we have fully linted code at every build.

Take care

November 27, 2011

Is Exception Handling Broken on .Net?

Filed under: Development — rcomian @ 10:47 pm

Here’s a challenge for you. Take a .Net project of yours (a large one, preferably in c#) and look at how much exception handling is done. Now, for every try/catch statement, prove that you’re catching only the exceptions that you expect to be thrown.

This is the challenge facing anyone who wants to correctly handle their error conditions. It’s also a challenge that you should be willing to step up to if you want to get your code windows logo certified.

There’s a problem if you miss exceptions that should be caught. If you do that, your application will simply crash, unceremoniously losing anything your customer hasn’t saved and possibly earning you an angry phone call. In terms of customer satisfaction, it’s clearly the #1 thing to avoid doing.

There’s also a problem if you catch exceptions that you shouldn’t. The worst case scenario is that you corrupt user data. More likely, you’re hiding the place where the original problem occurred and saving the actual crash for later. If you’re really good at catching everything you shouldn’t, the application might limp along in a bad enough state that the user has to stop and restart it before things start working properly again. I know I’ve been there as a user many times.

If you’re signed up for Windows Error Reporting, catching too many exceptions means that you don’t get notified of real problems with your application – there’s a whole set of problems you never see, or that end up manifesting in some unexpected and un-debuggable place.

So exception handling appears to be this wonderful balancing act: it’s essential that you get just enough and not too much. I assume you’d rather be pro-active and double check that you’ve got all the error cases correctly covered, rather than let your customers find them with a crash.

Good luck.

Seriously, I find it hard to believe I’m writing this, but you can’t do it.

Sure, you can read the documentation for every method you call, check what you need to do for every documented exception and handle it. That’s quite a lot of work, especially on an existing project. And that’s if you’re lucky: it assumes all the exceptions are documented, and documented correctly and fully – including those from all the 3rd party libraries you use … and all the internal ones.

And once you’ve done all that, the only way to verify that it’s still correct in your next release is to do it all again. What happens when you move to the next version of a 3rd party library, or of .Net? You need to examine every method you use to see if any new exceptions have been added, old ones removed, or the handling requirements of the others changed. Then check that your code handles them all correctly, in every location.

You’d think there would be a tool to help you with this, but there’s nothing in .Net or Visual Studio that offers any clue. One company, Red Gate, did attempt it at one point. However, reading their website, they were having trouble keeping the list meaningful, and with .Net 4 they gave up.

So even if you catch only the exceptions that are actually thrown, you’re safe, right?
Of course not. Because someone, somewhere, will be throwing a raw “System.Exception”, and the only way to catch that is to catch “System.Exception”, which catches everything. So now you need to catch everything, check whether each exception is what you really thought it was, and re-throw everything else.

Of course, you’re re-throwing with just the “throw” statement, right? Not the “throw obj” version of the statement. Everywhere. Sure?
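A minimal sketch of that filter-and-rethrow pattern, with `int.Parse` standing in for any call that can throw more than you expect (the `Describe` helper is mine, for illustration):

```csharp
using System;

class Worker
{
    public static string Describe(string input)
    {
        try
        {
            return "Parsed: " + int.Parse(input);
        }
        catch (Exception ex)
        {
            // Only handle the one failure we actually expected...
            if (ex is FormatException)
                return "Bad input";

            // ...and re-throw everything else with a bare "throw;",
            // not "throw ex;", so the original stack trace survives.
            throw;
        }
    }
}
```

Here `Describe("x")` is handled as bad input, while `Describe(null)` leaks the original ArgumentNullException untouched.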

Let’s assume you can change that code to throw something sensible instead of a raw System.Exception. How about a System.ApplicationException? That’s bound to be just the exception we were expecting to catch, right? If you answered “no”, you’re getting the hang of this. The .Net framework will happily throw its own exceptions derived from ApplicationException, even if you haven’t derived anything from it yourself. So you still need to check the actual exception type after you catch it, just to make sure it’s something you expected. No, you need to ignore what Microsoft have given you and build your own exception hierarchy, derived directly from System.Exception.

Ok, so we’ve done all that. We’ve manually gone through the latest documentation for every method we ever call and checked that we’re handling all the exceptions correctly. We’re not throwing any base-class exceptions that aren’t our own and we’re re-throwing everything correctly. We’re clear now, right?

Ah, you’re catching on: not yet. Let’s look at that web service you use. You’re catching a WebException, right? And you’ve checked the inner exception as well, haven’t you? Why? Because although Microsoft will deny you a Windows logo certification if you catch too many exceptions, they don’t follow their own advice. If something like an AccessViolationException occurs whilst calling a web service, it gets wrapped up very nicely in a WebException and presented to you just like a timeout, a connection failure, or a 500 error. So you might need to leak a WebException.
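One way to deal with that wrapping might look like the sketch below: unwrap the WebException and leak anything fatal before deciding the failure is handlable. The set of “fatal” inner types and “handlable” statuses here is my own illustrative choice, not a complete policy:

```csharp
using System;
using System.Net;

static class WebErrors
{
    // Decide whether a WebException is a genuine network failure we can
    // handle, or a wrapper around something fatal that must be leaked.
    public static bool IsHandlableNetworkError(WebException ex)
    {
        // The framework's plumbing can wrap serious faults; unwrap and
        // re-throw those rather than treating them like a timeout.
        if (ex.InnerException is AccessViolationException)
            throw ex.InnerException;

        return ex.Status == WebExceptionStatus.Timeout
            || ex.Status == WebExceptionStatus.ConnectFailure
            || ex.Status == WebExceptionStatus.ProtocolError; // e.g. a 500
    }
}
```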

So what could InnerException be in these cases then? Surely that’s documented? Yeah, right.

Why is this such a hard problem?

It strikes me after much pain working through all this, that .Net’s exception handling is fundamentally flawed.

Firstly, we rely on documentation to work out what exceptions can be thrown. We don’t rely on documentation to work out what parameters to pass or what return values we get, yet handling the error conditions is left entirely to our discretion.

Secondly, there are some exceptions which could get thrown at any point and would probably never be thrown by your own code. The clearest cases are things like StackOverflowException and AccessViolationException; perhaps NullReferenceException belongs in this list too. We would almost never want to catch these: they are genuine coding errors, never something to do with the state of our user’s workflow. Yet we have no way to determine which exceptions fall into this category.

The exception hierarchy is a joke. The knee-jerk reaction when categorizing AccessViolationException is to say “it derives from System.SystemException; exceptions derived from there must be serious!”. And then you find yourself leaking when you parse a user’s badly formatted xml snippet, because XmlException is also derived from there. These aren’t isolated incidents: you can’t tell anything meaningful from the .Net exception hierarchy. The namespace usage is slightly better, but still useless.

If an exception is going to be a base class, it must never be thrown directly, and there’s a built-in mechanism for exactly that: the abstract class. An abstract base class can never be thrown directly, preventing a whole category of errors. And once an exception class is made concrete, it must never be derived from again; we have a mechanism for that too, since sealed classes can’t be derived from. Yet the built-in exception hierarchy makes use of neither.
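One possible shape for such a hierarchy, using abstract for the bases and sealed for the leaves; the order-processing names are made up purely for illustration:

```csharp
using System;

// Abstract root: callers can catch the whole category, but nothing can
// ever throw the base class itself.
public abstract class OrderException : Exception
{
    protected OrderException(string message) : base(message) { }
}

// Sealed leaves: these can be thrown, but never derived from, so a
// catch of one of these types catches exactly that failure and nothing else.
public sealed class PaymentDeclinedException : OrderException
{
    public PaymentDeclinedException(string message) : base(message) { }
}

public sealed class OutOfStockException : OrderException
{
    public OutOfStockException(string message) : base(message) { }
}
```

With this shape, `catch (OrderException)` catches the category, and the compiler stops anyone adding surprise subtypes to a leaf.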

So what can we do to mitigate these problems?

Well, at the end of the day, your exception handling is your best guess. You can’t make it any better than that, so 90% coverage is the way to go. Sign up for Windows Error Reporting and find out what you’re leaking in the field. It’s a shame that, as a top tier development company, that’s the best you can do, but it’s the best anyone can do, so don’t feel bad.

In those cases where a leak would be a truly terrible thing, keep doing what you were doing anyway.
Let’s be honest, your windows forms event handlers are all wrapped with “try {} catch (Exception e) {}”, aren’t they? Because after years of pissed off customers, that’s what you’ve resorted to. Yes, your deeper code does all the proper exception handling you want, but the lure of the safety net was too much for your bottom line to resist. Well, in those critical, high level places, keep doing that. But try to categorize the list of exceptions you don’t want to catch, and make sure you leak them. Unless you’re doing something really funky, you don’t want to hide an AccessViolationException.

And of course, unit testing can help. It’s not 100%: firstly because your coverage won’t be 100%, secondly because you’re still relying on documentation to work out which exceptions you’re supposed to be emulating, and finally because a lot of the exceptions that happen in practice are environmental problems, like dropped connections.

But the best way to handle your exceptions?

Don’t.

At least, don’t if you can get away with it. Just crash. In many cases that’s the best thing to do. If you’re a service, it’s a no-brainer, assuming your service will be auto-restarted. A web page returning an error is a good thing for the visibility of errors: they get logged, and you can find out about every single one of them as they happen.

Avoid getting into situations where exceptions are possible. Always use “TryParse” instead of “Parse”, for example. Always check that the file exists before opening it. That sort of thing. If you’re thinking of throwing an exception, offer a way for callers to find out about the situation before you need to throw it.
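For instance, a couple of small sketches of that style; the default port and the empty-string config fallback are arbitrary choices for illustration:

```csharp
using System.IO;

static class Defensive
{
    public static int ParsePort(string text)
    {
        // TryParse reports failure through its return value, so the
        // unhappy path never touches the exception machinery at all.
        int port;
        return int.TryParse(text, out port) ? port : 8080;
    }

    public static string ReadConfig(string path)
    {
        // Ask first, rather than letting File.ReadAllText throw a
        // FileNotFoundException for a situation we could predict.
        return File.Exists(path) ? File.ReadAllText(path) : string.Empty;
    }
}
```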

Going 100% exception free isn’t possible. Some databases throw exceptions that demand the transaction be retried; that’s aggravating, but handle it and leak everything else. A web service call should be retried a couple of times if certain error codes get returned, but anything else, let it go. Keep it minimal.
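A minimal sketch of that “retry the transient, leak the rest” shape. Here TimeoutException stands in for whatever your database or web stack actually reports as retryable; picking that mapping correctly is the part you have to get right:

```csharp
using System;
using System.Threading;

static class Retry
{
    public static T OnTimeout<T>(Func<T> action, int attempts = 3)
    {
        for (int i = 1; ; i++)
        {
            try
            {
                return action();
            }
            catch (TimeoutException)
            {
                // Out of attempts: leak it like everything else.
                if (i >= attempts)
                    throw;
                Thread.Sleep(50 * i); // brief, growing back-off
            }
        }
    }
}
```

Any exception other than TimeoutException passes straight through, untouched, on the first occurrence.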

The cost of crashing a windows application is higher. But why are you writing thick client applications in this day and age? Perhaps you can treat it as a container for applets that crash with impunity – I’m thinking of Chrome’s “aw snap!” errors here.

Crash, seriously. It’s the future.

April 5, 2011

Playing with long polls

Filed under: Development — rcomian @ 12:11 pm

Server Push is a technique to get data down to a client as soon as it becomes available. It’s also got other names such as Long Poll and Comet.

There’s nothing particularly fancy about the technique, it’s essentially a web request that takes a really long time. The request stays open until whatever data the client’s interested in is available on the server.

Using it means clients can get updates almost as soon as they happen on the server, without polling. Polling is generally inefficient and wasteful of both network resources and server processing: it’s up to the client to decide when to poll, and you can never really tell when, or if, a client is going to get the update.

Server push is really easy to configure – if your client can make a web request to your server, then you can use this technique without installing any fancy software or opening any unexpected ports, it just works.

The trick with Server Push is to have a very efficient server. Traditional web servers would start up a thread for each and every connection. Whilst this works when you’ve got very limited simultaneous requests, it doesn’t scale up very well when dealing with lots of requests that, for most of their lives, aren’t doing anything.

Frameworks like python’s Twisted, ruby’s EventMachine and javascript’s Node.js have shown a way to make this really efficient – don’t spawn threads, just process requests using events. So we know it’s possible – are there any other ways to do it?

I’ve been looking at the problem using .Net’s HttpListener, which is actually quite efficient. It’s essentially a managed wrapper around http.sys. Web requests come in through a call to GetContext – no extra thread is created, you just get an object representing the request, and you respond to it as you see fit.

It’s quite simple, then, to put the requests that come in straight onto a list and wait for something to happen to your data. When that something happens, simply iterate over the list, sending the updates to the clients. You don’t have to reply from the thread that accepted the request – indeed, there’s no need to reply immediately at all.
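A stripped-down sketch of that idea, assuming one shared update stream and ignoring shutdown and error handling; the class and member names are mine, not a standard API:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Text;
using System.Threading;

class LongPollServer
{
    readonly HttpListener listener = new HttpListener();
    readonly object sync = new object();
    List<HttpListenerContext> subscribers = new List<HttpListenerContext>();

    public void Start(string prefix) // e.g. "http://localhost:8080/updates/"
    {
        listener.Prefixes.Add(prefix);
        listener.Start();
        new Thread(AcceptLoop) { IsBackground = true }.Start();
    }

    void AcceptLoop()
    {
        while (listener.IsListening)
        {
            // GetContext blocks until a request arrives, but no thread is
            // dedicated to the request itself: we just park the context.
            HttpListenerContext context = listener.GetContext();
            lock (sync)
                subscribers.Add(context);
        }
    }

    // Call this whenever new data becomes available.
    public void Publish(string message)
    {
        List<HttpListenerContext> toNotify;
        lock (sync)
        {
            // Swap the list out; iterating under the lock would block accepts.
            toNotify = subscribers;
            subscribers = new List<HttpListenerContext>();
        }
        byte[] payload = Encoding.UTF8.GetBytes(message);
        foreach (HttpListenerContext context in toNotify)
        {
            context.Response.OutputStream.Write(payload, 0, payload.Length);
            context.Response.Close();
        }
    }
}
```

Every parked client gets its reply only when Publish fires, which is the whole trick.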

It’s a beautifully simple principle and it works really well. I’ve had 135k active connections on my test machine and it doesn’t even break a sweat. Once the connections are made, the only resource used is RAM, and those 135k connections only used about 800MB. No handles, threads or CPU are consumed on the server, so there’s no fixed upper limit that I can find.

What’s even better, is that clients will appear to wait forever for the results. I’ve had tests running for >12hrs, then receiving the response and carrying on like nothing was amiss.

It’s also worth noting that HttpListener can be a really good server in its own right: by caching the replies sent down to each client, I’ve been able to consistently handle 150k requests a minute on a quad core server over a 100Mb/s network connection.

There are a few things I’ve found that are worth noting:

Accept connections as quickly as possible

In essence this means getting to the next call to GetContext as quickly as possible. There are two methods you can use for this – have a single loop that continuously calls GetContext and farms out the actual work in some way, or use BeginGetContext, and in the callback method, call BeginGetContext again immediately, before you process your current request.
Remember that List is backed by an array

If you’re adding hundreds of thousands of items to it in a time critical context (such as within a lock block), it gets really inefficient as it copies the data each time it grows the backing array. You can either pre-allocate, if you know roughly how many items will be in it, or just use a LinkedList, which has constant time adds and removes – which is what you’ll likely be doing most of the time anyway. List is really bad if you start paging, since it’s essentially copying that data at disk speed.

Don’t block the list

The list is central to this processing, and every lock on it reduces your ability to service requests. An example is in sending out the replies – it may not take long to loop over 100k objects in a list and call BeginWrite on each one, but it does take a few seconds. It’s much better to take your lock (which isn’t on the list itself), copy the list reference and create a new list in its place. Then you can iterate through your list of subscribers at leisure, without hampering the server’s ability to accept new requests. If you’ve got a lot of subscribers, the clients you serviced at the beginning of the list will already be queueing up for the next update by the time you get through it, so this is far from academic.

Don’t throw that exception

Time really is of the essence: whether you’re getting an entry onto a list or sending the data out, it needs to be quick, and throwing exceptions is slow. So even though it’s more clunky, use that TryParse instead of Parse, and structure your logic so that you’re not throwing trivial failures into the exception handler for mere coding convenience.

Cache Everything

This setup can be really efficient if the same data is being sent to everyone. In this case, you can render the data you send to one client into a byte array and keep it, so that every subsequent client receives a copy of that same byte array.

Use a versioned REST style interface

This is just a suggestion, but use the url to locate the resource you want, and have the client send a query string stating what version they’ve got. If your current version doesn’t match, or the client doesn’t provide a version, just send back the latest version without queueing anything. If the version does match, add the client to the subscriber list for when the next version becomes available. It goes without saying that you need to synchronise this so that no-one gets left behind.

Use the background threads

I’m a little more cautious about this advice, but it’s worked for me in my testing – using the asynchronous calls for everything does appear to work really nicely. One note of caution: the thread count can go up alarmingly once things start going wrong (actually, I’ve only seen this on the client, not the server). It’s only a temporary situation, however; once the issue is resolved, the threadpool sorts itself out nicely.

Have a plan for dropped subscribers

In my proof of concept I didn’t really care, but in reality, subscribers can disconnect, and you don’t want to keep them in your list any longer than necessary. I haven’t yet found a way of cleanly and reliably detecting a failed connection using the HttpListener classes, although I haven’t looked too hard. It might be that the best you can do is return an empty response and ask the client to reconnect. Whilst this might not sound any better than polling, it’s still fairly lightweight and responsive, and the server can dynamically determine what it needs to do, modifying these ‘poll periods’ according to load.
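To make the versioned-REST suggestion concrete, here is a sketch of the subscribe-or-respond decision with the networking abstracted behind a callback, so the logic stands alone; all the names and the version scheme are illustrative:

```csharp
using System;
using System.Collections.Generic;

class VersionedResource
{
    readonly object sync = new object();
    readonly List<Action<long, byte[]>> subscribers = new List<Action<long, byte[]>>();
    long currentVersion = 1;
    byte[] cachedPayload = new byte[0]; // rendered once, shared by everyone

    // reply receives (version, payload), either immediately or when the
    // next version is published.
    public void Request(string clientVersionText, Action<long, byte[]> reply)
    {
        long clientVersion;
        bool hasVersion = long.TryParse(clientVersionText, out clientVersion);
        lock (sync)
        {
            if (!hasVersion || clientVersion != currentVersion)
                reply(currentVersion, cachedPayload); // behind or new: answer now
            else
                subscribers.Add(reply); // up to date: park until the next version
        }
    }

    public void Publish(byte[] payload)
    {
        List<Action<long, byte[]>> toNotify;
        long version;
        lock (sync)
        {
            currentVersion++;
            cachedPayload = payload;
            version = currentVersion;
            toNotify = new List<Action<long, byte[]>>(subscribers);
            subscribers.Clear();
        }
        foreach (Action<long, byte[]> reply in toNotify)
            reply(version, payload);
    }
}
```

A client without a version gets an immediate answer; a client that already has the current version waits for the next Publish.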

I’ve actually found it much harder to write a client that makes 50k requests than a server that accepts 100k requests.

Know your http limits

By default, only 2 concurrent http connections are made per machine to any particular server. Use kb282402 to fix this if you’re using a wininet based client, and set ServicePointManager.DefaultConnectionLimit if you’re using .Net’s WebRequest class.

Know your ports

Each request to the server requires an open, bound port on the client, and there are only ~65k available for each IP address. So if you want >65k connections from one machine, you’ll have to have multiple IP addresses on the network and bind your requests to them explicitly. Also, windows varies how many ports are available for this use. My developer machine allowed everything >1024, whereas Vista/2008 and above only use ports >49152, limiting you to about 16k outbound connections of any kind. Use kb929851 to configure this. Also keep in mind that ports don’t become available just because the connection closed: ports can stay unavailable for 4 minutes whilst they mop up any late server packets. This can be reduced by the OS if there are a lot of ports in this state, but it can bite you if you’re trying to recycle your 50,000 connections.

Know your handles

With .Net’s WebRequest class, and probably with any outbound network connection, a handle is created for each one. Windows XP limited the number of open user handles to 10,000; on windows 2008R2 I’ve not had any trouble running 50,000 connections, but it’s something else to be aware of.

Any server can be flooded, live with it

A server can only accept so many connections a second. The network stack can queue up a backlog of connections itself, but the size of the backlog is limited (the best documentation I can find says between 5 & 200). Beyond this, the network stack will reject connections without your server code ever knowing they were there. This is the main reason why it’s so important to accept connections quickly. But even so, if you’re making 50,000 asynchronous connections to a server from each of 4 clients at the same time, for a total of 200k connections, that could all take place in a few seconds. No server can handle that: you will get errors, and you must have a way of recognizing them, handling them gracefully and re-trying them.

I wonder how many people could make use of this technique, how many already are, and what other issues there are with it. Has anyone found a different solution to the problem?

December 2, 2010

std::vector indexes have a type – and it’s not “int”

Filed under: Development — rcomian @ 12:51 pm

Just a quick aside. I’ve seen a fair bit of code recently of this kind:

std::vector<int> myvector = …
for (int i = 0; i < myvector.size(); ++i)

This code produces a signed/unsigned comparison warning when compiled, and the compiler is right to complain.

Vectors are awkward beasts – in this, they fit right in with the rest of C++. In particular, the type of the variable “i” above is not an int, unsigned int or a DWORD; it’s std::vector<int>::size_type.

This size type is also the numeric type accepted by the [] operator. An int will pass into this operator without warning, but it’s not correct. And going the other way – especially via myvector.size() – is a potential issue.

So the for loop above should be written:
std::vector<int> myvector = …
for (std::vector<int>::size_type i = 0; i < myvector.size(); ++i)

Or, even better, use iterators!
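For instance, summing with the container’s own iterator type instead of an index, in the pre-C++11 style matching the loops above:

```cpp
#include <vector>

// No index arithmetic and no signed/unsigned conversion: the iterator's
// type comes straight from the container.
int sum(const std::vector<int>& v)
{
    int total = 0;
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        total += *it;
    return total;
}
```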
