Developer Drain Brain

October 23, 2015

OOP 101 – Why separate Classes and Instances?

Filed under: Development — rcomian @ 7:15 pm

In the previous post we saw that we can define a class and then create an instance of that class to use it.

A serious question is why do we have to do this? Why instantiate a class?

The answer is that we can use multiple instances of a class at the same time. So we can write one class that represents one piece of functionality and re-use it multiple times.

Let's think of a concrete example: a button.

In any application, across many pages, you will often see multiple buttons. They all work in roughly the same way: they draw a piece of text with a box around it, and when you press one it animates to look like it's been pressed and then something happens.

Software is all about working out what code can be re-used, and a button sounds like an ideal thing to reuse. We won’t go into a full button definition, but we could define a simple button to demonstrate our concepts.

Let's first work out what we want our button to do for us. I'm going to make up some requirements:

  1. We need to be able to say what text should be on the button
  2. We need to be able to tell the button where on the screen it should be drawn using X & Y coordinates
  3. We need to tell the button to draw itself normally
  4. We need to tell the button when it has been clicked so that it can draw itself in the “pressed” state

class MyButton
{
  // Requirement 1: the text displayed on the button
  string ButtonText;

  // Requirement 2: where on the screen the button is drawn
  int PositionX;
  int PositionY;

  // Requirement 3: draw the button in its normal state
  void Draw() {
  }

  // Requirement 4: draw the button in its "pressed" state
  void Click() {
  }
}

We should be able to see how this class fits our requirements. First off, we have some data associated with the class. ButtonText is a member variable and will hold the text that gets displayed on the button. This lets us choose what text appears on the button, which is requirement 1.

Next we have 2 member variables: PositionX and PositionY. These will hold where on the screen we need to draw the button, which meets requirement 2.

Then we have the Draw method. This is empty in our example, but in real life it would use the ButtonText and the PositionX and PositionY data to draw the button with the correct label at the correct location on the screen. This would meet requirement 3.

Then finally we have the Click method. Again this is empty in our example, but it would draw the animation for when the button was clicked, making it look like it was pressed down and then pop out again. Again it would use the ButtonText and PositionX and PositionY data to do this, and meet requirement 4.
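Just to make this concrete, and purely as an illustration (this isn't any real UI framework's API), here's roughly what a filled-in version might look like, with the "drawing" done by writing to the console so the example can actually run. I've made the members public so they can be set from outside, which the sketch above glosses over:

class MyButton
{
  public string ButtonText;
  public int PositionX;
  public int PositionY;

  public void Draw() {
    // A real framework would paint pixels here; the console stands in for that.
    System.Console.WriteLine("[ " + ButtonText + " ] drawn at (" + PositionX + ", " + PositionY + ")");
  }

  public void Click() {
    // Show the "pressed" state, then redraw the normal state.
    System.Console.WriteLine("[ " + ButtonText + " ] pressed");
    Draw();
  }
}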

So as before, with just the code above, we haven't actually done anything – we've just declared a class. We don't have any buttons yet; we need to make some, say what their text is and where they are, and then tell the code to draw them. So if we had a program that needed an "OK" and a "Cancel" button, we could use them like this:


MyButton okButton = new MyButton();
okButton.ButtonText = "OK";
okButton.PositionX = 500;
okButton.PositionY = 500;
okButton.Draw();

MyButton cancelButton = new MyButton();
cancelButton.ButtonText = "Cancel";
cancelButton.PositionX = 600;
cancelButton.PositionY = 500;
cancelButton.Draw();

So what we do is first create an instance of MyButton and store it in the okButton variable. Internally, a piece of memory is reserved that is large enough to store the ButtonText, PositionX and PositionY member variables for this button, and the location of this memory is given the name okButton in our code.

Very simply, we can say that things look like this:

[Diagram: the okButton variable pointing at its reserved block of memory]

Next we use the okButton variable and say "inside this variable's reserved memory location, set the ButtonText data to the string "OK"".
Then we do the same for PositionX and PositionY, storing where we want this button to be drawn.

Updating the previous diagram with more detail, we get something like this:
[Diagram: okButton's memory now holding ButtonText "OK", PositionX 500 and PositionY 500]

Finally we say, "call the Draw method of the MyButton class, but whenever that method uses ButtonText, PositionX or PositionY, use the data in the memory reserved for the variable okButton".
All of this is said by the much shorter syntax: okButton.Draw();

So at this point, we would have a button drawn on the screen saying “OK”. And we’ve got our variable okButton so that we can talk to this button later.

Next in the code we instantiate a second button. When we do this, we reserve a new, completely different area of memory, which again is big enough to store the data for the MyButton class. This memory is in a different location to the memory for the previous instance, and is given the name cancelButton in our code.
Once we've allocated it, we set the contents of that memory location appropriately: the string for ButtonText and the values for the PositionX and PositionY member variables.

At this point, our program and its memory would look something like this:
[Diagram: two separate blocks of memory, one for okButton and one for cancelButton]

We then call Draw again, which runs the Draw method in the MyButton class again. But this time, it uses the data stored in the cancelButton memory location to do its work, not the data for okButton. This lets us have 2 buttons on the screen which are completely independent.

So what about that Click method? We'd call it when we determined that a user had pressed the button somehow (I'm being deliberately woolly on the details; every UI framework already has a set of classes for doing all this, and they're all different). Basically, we'd keep hold of our button variables somewhere and, later on in the code, call the Click method at the appropriate time, just like this:


... // Somehow determine that the user had clicked OK
okButton.Click();
... // Do whatever we need to do on OK

This would run the code we wrote in the Click method of the MyButton class. But it would do so using the data for the okButton. In this way, the okButton – and only the okButton – would show the animation of being clicked.

OOP 101 – Classes, Instances and Objects

Filed under: Development — rcomian @ 3:10 pm

For me, the most basic thing to understand when dealing with any kind of object oriented language is the relationship between classes and objects.

Let's go through the mechanics of what's going on from a developer's point of view.

If you've written any applications in Java or C#, you've certainly written some code that looks like this:


class MyClass
{
  string MyMethod() {
    return "My string";
  }
}

It doesn't really matter which language we're talking about here; the idea is the same in C++, Java and C#. The exact syntax will vary, so don't try to compile that code verbatim, but the overall idea is the same.

What we’ve done is define a class called MyClass. It contains a single method called MyMethod and that method just returns the string value "My string".

It's worth knowing that on its own, writing, compiling and running that code won't actually do anything at all. Not a thing. We've just set some stuff up.

To get it to do something you’ll need to instantiate that class. You will probably also have written something along the lines of this:


MyClass myObject = new MyClass();

Now we've done something a little more: we've taken the class we created before (MyClass) and created an object which is an instance of that class. We can refer to that object using the name myObject.
An object is an actual thing that lives in memory. It can have data associated with it and a set of methods on it which we can call. Because that data and those methods are defined by a class, we call the object an instance of that class.
myObject is an instance of MyClass, so we have access to everything that MyClass contains. That is, a class is just a definition; an object is a thing we can actually do things with.

So what can we do with an instance of MyClass? Because MyClass contains one method: MyMethod, we can call it:


MyClass myObject = new MyClass();
string returnedString = myObject.MyMethod();

This code creates an instance of MyClass, just like before, then calls the MyMethod method on that instance.
This method returns the string "My string", which itself gets stored in the string variable returnedString, which we can then use for other things.
So this code doesn’t do much, but serves as a starting point for understanding the difference between a class and an object.

April 12, 2012

JSLint with msbuild

Filed under: Development — rcomian @ 10:17 am

Recently, I’ve been working with Javascript for a project at work and want to get all the files linted with JSLint.

Since we’re still a Microsoft shop (how I hate that phrase) and intend to build things with msbuild, it makes sense to get this working under msbuild.

Now so far, I’d had jslint running from the windows version of node from the commandline. Works like a demon, no problems. I’ve also had lint working from a makefile as well. So plugging this into msbuild … how hard could it be?

Well, msbuild includes the wonderful “Exec” task, for running programs during a build. So I thought I’d stick with what I knew and run jslint from there.

Now on the commandline, you can run jslint and it picks it up from the path (and runs it under node automatically). Not so with msbuild. Of course, you can’t reference the jslint.js file directly either, since it’s not an executable. So I had to go with “node ../jslint.js — files to lint”.

No problem, this is fine. And it works too. When the files all lint properly, it’s exactly as you’d want. When there’s an error, however, you just get “error”. The build fails, but the error messages from jslint are suppressed. I think you’d call this “minimal functionality”.

So how to get the error messages visible? First I tried building with /v:diag, just to make sure that they weren't there at all – they weren't. Second I tried redirecting stderr->stdout, as I'd read somewhere that the Exec task doesn't capture stderr. I used various incantations of "2>&1" and "1<&2", depending on who's writing the article and whether you're thinking unix or dos.

Still no dice.

Next I'm starting to think custom actions. Now, I've been playing in Javascript for a while now, and suddenly I'm looking at C# and thinking – seriously? I've got to write the code, then compile it myself, then manually take care of the resulting .dll file and use that? Compiled code can be a real drag. I know, I've spent the last 10 years working with it.

Fortunately, msbuild now contains "inline tasks", which fit the bill quite nicely. I can now write a quick little task and reference it.
So I did, just a simple process exec set up to run node with the appropriate command line. It all worked fine too. In exactly the same way as the “Exec” task.

Even though I was manually reading both streams and logging both outputs, jslint just doesn’t print the error logs when running from msbuild. It does print the name of the file that fails, so it’s getting output, but that’s all.

Now this isn't 100%: I have seen the correct failing output on some occasions, even from the Exec task. So there's something screwy going on between node and jslint. But we're a Microsoft shop, we don't even use node, so it's not really worth my time debugging this too much.

I started looking around for alternatives. I did find the Sedo Dream collection of tasks, which includes a jslint build task, but I really want to have a "check this out, build it" workflow, and installing a 3rd party msi just doesn't cut it. Sometimes there's no alternative, but I don't want to add something unless it's really necessary. There's no equivalent zip file for this, so I'd have to repackage it myself for distribution, and it looks like quite a large library of "yet another collection of generic msbuild tasks" that is no doubt wonderful, but we've got a lot already.

Finally I came across JSLint for WSH. This looked promising, since it was a single javascript file running in the normal windows environment. It was great.
One of the things I look for with this kind of wrapper is "how do you update it?" I noticed that the last checkin of the jslint part of the package was from August 2011. That's a little out of date; I know old Crockford updates more often than that.

Looking at the source, I realised that it was simply the core of JSLint with a small executor and reporter tagged onto the end. I pasted the latest version of JSLint into the top of the file and it worked fine, so I’m quite happy that it’s easy enough to keep it up to date, even if the package authors haven’t felt the need to.

But it still wasn't quite right. First off, it completely balked at my actual javascript files. It turns out they're unicode with byte order marks (BOM), which JSLint was trying to read as javascript. Now the node version of jslint worked fine, so I looked at what that was doing – and it was as simple as checking the first three characters, seeing if they were the BOM and removing them if they were.
Pasting this code into jslint-for-wsh didn’t work immediately. The original code was using a compare like this:

if (0xEF === content.charAt(0) ...

Whilst it works fine on node, I don't blame cscript for having trouble with this. Changing it to get the actual character code (i.e. a number) to compare against a number works fine:

if (0xEF === content.charCodeAt(0) ...

Finally, I found that all my line numbers were off. After a little head scratching, I realised that it wasn't counting blank lines. It turns out that cscript's split function ignores blank lines, meaning that JSLint has no way of keeping track of where it actually is in the code.

The way around this was to do a little more work and build up the array of strings as we read the file, rather than reading the file as one big text blob and making jslint split it up. It's a simple while loop with ReadLine rather than just ReadAllText.

To run all this, a simple Exec task works fine:

<Target Name="jslint" Condition="'$(BuildCmd)'!='Clean'">
  <!-- Define the files to lint -->
  <ItemGroup>
    <FilesToLint Include="*.js" />
  </ItemGroup>

  <Exec Command="cscript //B //NoLogo //T:10 &quot;..\JSLint\jslint-for-wsh.js&quot; &quot;%(FilesToLint.FullPath)&quot;" />
</Target>

And now we have fully linted code at every build.

Take care

November 27, 2011

Is Exception Handling Broken on .Net?

Filed under: Development — rcomian @ 10:47 pm

Here’s a challenge for you. Take a .Net project of yours (a large one, preferably in c#) and look at how much exception handling is done. Now, for every try/catch statement, prove that you’re catching only the exceptions that you expect to be thrown.

This is the challenge facing anyone who wants to correctly handle their error conditions. It’s also a challenge that you should be willing to step up to if you want to get your code windows logo certified.

There’s a problem if you miss exceptions that should be caught. If you do that, your application will simply crash, unceremoniously losing anything your customer hasn’t saved and possibly earning you an angry phone call. In terms of customer satisfaction, it’s clearly the #1 thing to avoid doing.

There’s also a problem if you catch exceptions that you shouldn’t. The worst case scenario there is that you’re corrupting user data. More likely, you’re hiding the place where an original problem occurred and saving the actual crash for later. If you’re really good at catching everything you shouldn’t, you might be limping along in a bad enough state where the user has to just stop and restart the application before things start working properly again. I know I’ve been there as a user many times.

If you're signed up for Windows Error Reporting, catching too many exceptions means that you don't get notified of real problems with your application: there's a whole set of problems you never see, or that end up manifesting in some unexpected and un-debuggable place.

So exception handling appears to be this wonderful balancing act: it's essential that you catch just enough and not too much. I assume you'd rather be pro-active and double check that you've got all the error cases correctly covered rather than let your customers find them with a crash.

Good luck.

Seriously, I find it hard to believe I’m writing this, but you can’t do it.

Sure, you can read the documentation for every method you call, check what you need to do for every documented exception and handle it. That's actually quite a lot of work, especially if you're doing it for an existing project. If you're lucky, all the exceptions will be documented; they'll even be documented correctly and fully, including those from all the 3rd party libraries you use … and all the internal ones.

And once you've done all that, the only way you can verify that it's correct in your next release is to do it again. Now what happens when you go to the next version of a 3rd party library or .Net? You need to examine every method you use to see if any new exceptions have been added, old ones removed, or the handling requirements of the others changed. Then check that your code handles them all correctly, in every location.

You'd think there would be a tool to help you with this, but there's nothing in .Net or Visual Studio that offers any clue. One company, RedGate, did do this at one point. However, reading their website, they were having trouble keeping the list meaningful, and with .Net 4 they gave up.

So even if you catch only the exceptions that are actually thrown, you’re safe, right?
Of course not. Because someone, somewhere will be throwing a raw "System.Exception", and the only way to catch that is to catch "System.Exception" – which catches everything. So now you need to catch everything, check whether it's what you really thought it was, and re-throw everything else.

Of course, you’re re-throwing with just the “throw” statement, right? Not the “throw obj” version of the statement. Everywhere. Sure?
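As a concrete illustration (not code from any particular project), here's roughly what that filter-and-rethrow dance looks like. Note the bare throw, which preserves the original stack trace, where "throw ex" would reset it:

try
{
  // DoSomethingRisky is a made-up stand-in for whatever call you're wrapping.
  DoSomethingRisky();
}
catch (Exception ex)
{
  // Only swallow the one type we actually expected; everything else
  // goes straight back up the stack with its stack trace intact.
  if (!(ex is InvalidOperationException))
  {
    throw;
  }
  System.Console.WriteLine("Recovered from: " + ex.Message);
}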

Let's assume you can change this code to throw something sensible instead of a raw System.Exception. How about a System.ApplicationException, that's bound to be just the exception we were expecting to catch, right? If you answered "No", you're getting the hang of this. The .Net framework will happily throw its own exceptions which are derived from ApplicationException, even if you haven't derived anything from it yourself. So you still need to check the actual exception type after you catch it just to make sure it's something you expected. No, you need to ignore what Microsoft have given you and make your own exception hierarchy derived directly from System.Exception.

Ok, so we’ve done all that. We’ve manually gone through the latest documentation for every method we ever call and checked that we’re handling all the exceptions correctly. We’re not throwing any base-class exceptions that aren’t our own and we’re re-throwing everything correctly. We’re clear now, right?

Ah, you're catching on; not yet. Let's look at that web service you use: you're catching a WebException, right? You've checked the inner exception as well, haven't you? Why? Because although Microsoft will deny you a windows logo certification if you catch too many exceptions, they don't follow their own advice. If something like an AccessViolationException occurs whilst calling a web service, it will get wrapped up very nicely in a WebException and presented to you just like a timeout, a connection failure or a 500 error. So you might need to leak a WebException.
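A hedged sketch of what that defensive unwrapping might look like (CallService is a made-up stand-in for the real proxy call):

static string CallServiceCarefully()
{
  try
  {
    return CallService(); // made-up stand-in for the actual web service call
  }
  catch (WebException ex)
  {
    // If the WebException is just a wrapper around something fatal,
    // let it escape rather than treating it like a timeout or a 500.
    if (ex.InnerException is AccessViolationException)
    {
      throw;
    }
    return null; // treat a genuine network failure as "no result this time"
  }
}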

So what could InnerException be in these cases then? Surely that’s documented? Yeah, right.

Why is this such a hard problem?

It strikes me after much pain working through all this, that .Net’s exception handling is fundamentally flawed.

Firstly, we rely on documentation to work out what exceptions can be thrown. We don't rely on documentation to work out what parameters to pass, or what return values we get, yet handling the error conditions is left to our discretion.

Secondly, there are some exceptions which could get thrown at any point and would probably never be thrown by your own code. The clearest cases are things like StackOverflowException and AccessViolationException. Perhaps NullReferenceException is in this list too. We would almost never want to catch these. These are genuine coding errors and never something to do with the state of our user’s workflow. Yet we have no way to determine what exceptions fall into this category.

The exception hierarchy is a joke. The knee-jerk reaction for how to categorize AccessViolationException is to say "it derives from System.SystemException, and exceptions derived from there must be serious!". And then you find yourself leaking when you parse a user's badly formatted xml snippet, because XmlException is also derived from there. These aren't isolated incidents: you can't tell anything meaningful from the .Net exception hierarchy. The namespace usage is slightly better, but still useless.

If an exception is going to be a base class, it must never be thrown directly. There's a built-in mechanism for this: the abstract class. An abstract base class can never be thrown directly, preventing a whole category of errors. And once an exception class is made concrete, it must never be subsequently derived from. We have a mechanism for this too – sealed classes can't be derived from. Yet neither mechanism is used in the built-in exception hierarchy.
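A minimal sketch of the kind of hierarchy that advice implies (all of these names are invented for illustration):

// Never thrown directly: it exists only so callers can catch the whole family.
abstract class DataStoreException : System.Exception
{
  protected DataStoreException(string message) : base(message) { }
}

// Concrete and sealed: nothing can sneak in underneath it later.
sealed class RecordNotFoundException : DataStoreException
{
  public RecordNotFoundException(string message) : base(message) { }
}

sealed class StoreUnavailableException : DataStoreException
{
  public StoreUnavailableException(string message) : base(message) { }
}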

So what can we do to mitigate these problems?

Well, at the end of the day, your exception handling is your best guess. You can't make it any better than that, so 90% is the way to go. Sign up to Windows Error Reporting and find out what you're leaking in the field. It's a shame that, as a top tier development company, that's the best you can do, but it's the best that anyone can do, so don't feel bad.

In those cases where a leak would be a truly terrible thing, do what you did anyway.
Let's be honest, your windows forms event handlers are all wrapped with "try {} catch (Exception e) {}", aren't they? Because after years of pissed off customers, that's what you've resorted to. Yes, your deeper code all does proper exception handling as you want, but the lure of the safety net was too much for your bottom line to resist. Well, in those critical, high level places, keep doing that. But try to categorize the list of exceptions you don't want to catch and make sure you leak them. Unless you're doing something really funky, you don't want to hide an AccessViolationException.

And of course, unit testing can help. It's not 100%: firstly because your coverage won't be 100%, secondly because you're still relying on documentation to work out what exceptions you're supposed to be emulating, and finally because a lot of the exceptions that happen in practice are environmental problems, like dropped connections.

But the best way to handle your exceptions?

Don’t.

At least, don't if you can get away with it. Just crash. In many cases that's the best thing to do. If you're a service, it's a no-brainer, assuming your service will be auto-restarted. A web page returning an error is a good thing for the visibility of errors: they get logged, and you can find out about every single one of them as they happen.

Avoid getting into situations where exceptions are possible. Always use "TryParse" instead of "Parse", for example. Always check that the file exists before opening it. That sort of thing. If you're thinking of throwing an exception, offer a way for callers to find out what the situation is before you need to throw it.
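For example, a minimal sketch of the TryParse approach:

// int.Parse(input) would throw a FormatException on bad input;
// int.TryParse reports failure through its return value instead.
static int ReadPort(string input)
{
  int port;
  if (int.TryParse(input, out port))
  {
    return port;
  }
  return 8080; // fall back to a default rather than throwing
}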

Going 100% exception free isn’t possible. Some databases throw exceptions that demand the transaction be retried. That’s aggravating, but handle it and leak everything else.
A web service call should be retried a couple of times if certain error codes get returned. But anything else, let it go.
Keep it minimal.
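A hedged sketch of that kind of minimal retry loop; TimeoutException stands in here for whatever transient, "please retry" exception your database driver or service proxy actually documents:

static void SaveWithRetry(System.Action saveTransaction)
{
  const int maxAttempts = 3;
  for (int attempt = 1; ; attempt++)
  {
    try
    {
      saveTransaction();
      return;
    }
    catch (System.TimeoutException)
    {
      // Retry only the failure we know is transient; once we're out of
      // attempts, rethrow. Any other exception type escapes immediately.
      if (attempt >= maxAttempts)
      {
        throw;
      }
    }
  }
}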

The cost of crashing a windows application is higher. But why are you writing thick client applications in this day and age? Perhaps you can treat it as a container for applets that crash with impunity – I’m thinking of Chrome’s “aw snap!” errors here.

Crash, seriously. It’s the future.

April 5, 2011

Playing with long polls

Filed under: Development — rcomian @ 12:11 pm

Server Push is a technique to get data down to a client as soon as it becomes available. It’s also got other names such as Long Poll and Comet.

There’s nothing particularly fancy about the technique, it’s essentially a web request that takes a really long time. The request stays open until whatever data the client’s interested in is available on the server.

Using it means that clients can get updates almost as soon as they happen on the server, without polling. Polling is generally inefficient and wasteful of both network resources and server processing: it's up to the client to work out when to poll, and you can never really tell when, or if, a client is going to get the update.

Server push is really easy to configure – if your client can make a web request to your server, then you can use this technique without installing any fancy software or opening any unexpected ports, it just works.

The trick with Server Push is to have a very efficient server. Traditional web servers would start up a thread for each and every connection. Whilst this works when you’ve got very limited simultaneous requests, it doesn’t scale up very well when dealing with lots of requests that, for most of their lives, aren’t doing anything.

Frameworks like Python's Twisted, Ruby's EventMachine and JavaScript's Node.js have shown a way this can be made really efficient – don't spawn threads, just process the requests using events. So we know it's possible – are there any other ways to do it?

I've been looking at the problem using .Net's HttpListener, which is actually quite efficient. It's essentially a managed wrapper over http.sys. Web requests come in through a call to GetContext – no extra thread is created, you just get an object representing the request, and you respond to it as you see fit.

It’s quite simple then, to put the requests that come in straight onto a list and wait for something to happen to your data. When that something happens, simply iterate over the list sending the updates to the clients. You don’t have to reply from the same thread that accepted the request – indeed there’s no need to reply immediately at all.
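A minimal sketch of the shape of this, assuming something else calls Publish whenever a new version of the data is ready (the prefix URL and the string payload are made up for the example):

using System;
using System.Collections.Generic;
using System.Net;
using System.Text;

class LongPollServer
{
  private readonly HttpListener listener = new HttpListener();
  private readonly object sync = new object();
  private List<HttpListenerContext> subscribers = new List<HttpListenerContext>();

  public void Start()
  {
    listener.Prefixes.Add("http://localhost:8080/updates/"); // made-up prefix
    listener.Start();
    listener.BeginGetContext(OnContext, null);
  }

  private void OnContext(IAsyncResult ar)
  {
    HttpListenerContext context = listener.EndGetContext(ar);
    listener.BeginGetContext(OnContext, null); // accept the next request immediately
    lock (sync)
    {
      subscribers.Add(context); // park the request until the data changes
    }
  }

  // Called by whatever owns the data, whenever a new version is ready.
  public void Publish(string payload)
  {
    List<HttpListenerContext> toNotify;
    lock (sync)
    {
      toNotify = subscribers; // swap the list out under the lock
      subscribers = new List<HttpListenerContext>();
    }
    byte[] body = Encoding.UTF8.GetBytes(payload); // render once, send to everyone
    foreach (HttpListenerContext context in toNotify)
    {
      context.Response.OutputStream.Write(body, 0, body.Length);
      context.Response.Close();
    }
  }
}

The list swap and the immediate call back into BeginGetContext are exactly the points the tips below expand on.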

It's a beautifully simple principle and works really well. I've had 135k active connections on my test machine and it doesn't even break a sweat. Once the connections are made, the only resource used is RAM, and those 135k connections only used about 800MB. No handles, threads or CPU are consumed on the server, so there's no fixed upper limit that I can find.

What’s even better, is that clients will appear to wait forever for the results. I’ve had tests running for >12hrs, then receiving the response and carrying on like nothing was amiss.

It's also worth noting that HttpListener can be a really good server in its own right: by caching the replies sent down to each client, I've been able to consistently handle 150k requests a minute on a quad core server over a 100Mb/s network connection.

There’s a few things I’ve found that are worth noting:

Server
Accept connections as quickly as possible

In essence this means getting to the next call to GetContext as quickly as possible. There are two methods you can use for this – have a single loop that continuously calls GetContext and farms out the actual work in some way, or use BeginGetContext, and in the callback method, call BeginGetContext again immediately, before you process your current request.
Remember that List is backed by an array

If you're adding hundreds of thousands of items to it in a time critical context (such as within a lock block) it gets really inefficient as it copies the data each time it grows the backing array. You can either pre-allocate if you know roughly how many items will be in it, or just use a LinkedList – which has constant time adds and removes – which is what you'll likely be doing most of the time anyway. List is really bad if you start paging, since it's essentially copying that data at disk speed.

Don't block the list

The list is central to this processing, and every lock on it reduces your ability to service requests. An example is in sending out the replies – it may not take long to loop over 100k objects in a list and call BeginWrite on each one, but it does take a few seconds. It's much better to take your lock (which isn't on the list itself), copy the list reference and create a new list in its place. Then you can iterate through your list of subscribers at leisure without hampering the server's ability to accept new requests. If you've got a lot of subscribers, the clients that you serviced at the beginning of the list will already be queueing up for the next update by the time you get through your list, so this is far from academic.

Don't throw that exception

Time really is of the essence: whether you're getting an entry onto a list or sending the data out, it needs to be quick. Throwing exceptions is slow. So even though it's more clunky, use that TryParse instead of Parse and structure your logic so that you're not throwing trivial failures into the exception handler for mere coding convenience.

Cache everything

This setup can be really efficient if the same data is being sent to everyone. In this case, you can render the data you send to one client into a byte array and keep it, so that every subsequent client receives a copy of that same byte array.

Use a versioned REST style interface

This is just a suggestion, but use the url to locate the resource you want, and have the client send a query string stating what version they've got. If the current version doesn't match or the client doesn't provide a version, just send back the latest version without queueing anything. If the version does match, add the request to the subscriber list for when the next version becomes available. It goes without saying that you need to synchronise this so that no-one gets left behind.

Use the background threads

I'm a little more cautious about this advice, but it's worked for me in my testing – using the asynchronous calls for everything does appear to work really nicely. One note of caution – the thread count can go up alarmingly once things start going wrong (actually, I've only seen this on the client, not the server). It's only a temporary situation, however; once the issue is resolved the threadpool sorts itself out nicely.

Have a plan for dropped subscribers

In my proof of concept I didn't really care, but in reality subscribers can disconnect. Clearly, you don't want to keep these in your list any longer than necessary. I haven't yet found a way of cleanly and reliably detecting a failed connection using the HttpListener classes, although I haven't looked too hard. It might be the case that the best you can do is return an empty response and ask them to reconnect. Whilst this might not sound any better than polling, it's still fairly lightweight and responsive, and the server can dynamically determine what it needs to do, modifying these 'poll periods' according to load.
Client

I’ve actually found it much harder to write a client that makes 50k requests than a server that accepts 100k requests.

Know your http limits

By default, only 2 concurrent http connections are allowed per machine to any particular server. Use kb282402 to fix this if you're using a wininet based client, and set ServicePointManager.DefaultConnectionLimit if you're using .Net's WebRequest class (see the sketch after this list).

Know your ports

Each request to the server requires an open, bound port on the client. There are only ~65k available for each ip-address. So if you want >65k connections from one machine, you'll have to have multiple ip addresses on the network and bind your requests to them explicitly. Also, windows varies how many ports are available for this use. My developer machine allowed everything >1024 for this, whereas Vista/2008 and above only use ports >49152. This limits you to about 16k outbound connections of any kind. Use kb929851 to configure this. Also keep in mind that ports don't become available just because the connection closed. Ports can stay unavailable for 4 minutes whilst they mop up any late server packets. This can be reduced by the OS if there are a lot of ports in this state, but it can bite you if you're trying to recycle your 50,000 connections.

Know your handles

With .Net's WebRequest class, and probably with any outbound network connection, a handle is created for each one. Windows XP limited the number of open user handles to 10,000; in Windows 2008R2 I've not had any trouble when running 50,000 connections, but it's something else to be aware of.

Any server can be flooded, live with it

A server can only accept so many connections a second. The network stack can itself queue up a backlog of connections, but the size of the backlog is limited (the best documentation I can find says between 5 and 200). Beyond this, the network stack will reject connections without your server code ever knowing they were there. This is the main reason why it's so important to accept those connections quickly. But even so, if you're making 50,000 asynchronous connections to a server from 4 clients at the same time, for a total of 200k connections, that could all take place in a few seconds. No server can handle that; you will get errors, and you must have a way of recognizing them, handling them gracefully and retrying them.
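The connection-limit tweak mentioned above is a one-liner; set it before any requests are made:

using System.Net;

class Program
{
  static void Main()
  {
    // Raise the per-host limit before creating any WebRequests;
    // the default of 2 will throttle a load-test client badly.
    ServicePointManager.DefaultConnectionLimit = 1000;
  }
}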

I wonder how many people can make use of this technique, how many people are using this, and what other issues there are with this. Has anyone found a different solution for the problem?

December 2, 2010

std::vector indexes have a type – and it’s not “int”

Filed under: Development — rcomian @ 12:51 pm

Just a quick aside: I've seen a fair bit of code of this kind recently:

std::vector<int> myvector = …
for (int i = 0; i < myvector.size(); ++i)

This code produces a warning when compiled, and it’s right to do so.

Vectors are awkward beasts – in this they fit right in with the rest of C++. In particular, the type of the variable "i" above is not an int, unsigned int or a DWORD; it's std::vector<int>::size_type.

This size type is also the index type accepted by the [] operator. An int will pass into this operator without warning, but it's not correct. And going the other way – especially via myvector.size() – is a potential issue.

So the for loop above should be written:
std::vector<int> myvector = …
for (std::vector<int>::size_type i = 0; i < myvector.size(); ++i)

Or, even better, use iterators!

See also:
http://www.cplusplus.com/reference/stl/vector/

November 18, 2010

A Journey Through Source Control. Part 3. Thinking the Wrong Way Round

Filed under: Development — rcomian @ 1:32 pm

It’s time for a short rant. Something I’ve seen a lot when working with fellow developers who haven’t been formally introduced to a tool from the 3rd generation is a set of practices that just leave a bad taste in my mouth.

First off, a 2nd generation tool would generally make mapping the repository to the hard-disk very difficult, so people would check out the entire repository onto their disks to work on. This was fine at the time, but led to a lot of problems. One of them was that no-one was ever isolated from each other's changes – if you 'got latest' there was a very good chance that some of the shared code would be broken in some subtle way. Do that a few times and you quickly get wary of the practice, and end up just getting the minimum you can get away with at any point in time.

But if we look at the 3rd generation, things are a bit different. We’ve got specific, well defined versions that cut across all our files. Now, I don’t know about you, but to me, it feels much better to pull out a single card from a deck (hopefully the top one), work on it, then put the changes back as a new card on the top of the deck again.

If someone was to take that card, then build it, assuming that the build works we *know* that that version is good … and it will be forever. If I want to make another change, I can take that card, in its entirety, with impunity, and *know* – with 100% certainty that it will work. This means that when I make changes, if they work, then the card I put back on the pile will work as well. And we can guarantee that by having an automated system take that card and test it, giving it a green tick if it’s ok.

Unfortunately, most 3rd generation tools let you mix and match cards, so the practice of getting the least you can get away with still kind of works with them. But look – I've got a stack of shiny cards here, and each card has been carefully crafted to guarantee that it's good and has been verified by automatic tools that this is the case. But if you have half of your files from one card, a third from another and the rest from half a dozen other cards – exactly what are you working with? How can you guarantee anything at all? How can you be sure that the card you're about to put on the top of the pile will work with everyone else, when you've never tried it yourself? (You can't have tried it – you don't have it). It feels about as messy as taking a dozen playing cards and making a new card out of them by cutting them up and taping them together.

Of course, no-one’s perfect. Mistakes will get into the system. But if I take a bad card, and make my changes to it, the card I put back will also be bad – and I won’t know if it’s my fault or something that was already there. This means that we need to identify bad cards as soon as possible, and fix them immediately, so that we can be sure if we find a bad card with our name against it, the problem really was ours. The fix is just a new card, (since we can’t change the old cards), but the fact that we’ve spotted a bad card and fixed it quickly means that the next cards that get added to the system can be sure of a good stable base to work from.

One of the things I like most about this system is that if the worst comes to the worst, we can always scrap what we’ve got, get the latest known good card, and work with that. Instant cleanup is always available. We can also pull any particular card out and see if it had a particular problem, and if it did (and the previous card didn’t) we have a good start at working out exactly what went wrong and a shortlist of what files might be involved.

So we've now got a mental model of what our 3rd generation system is doing. We're working on a single card at a time and building up our stack of changes, verifying each one as we go. Next we'll look in a bit more depth at how branches work, what they mean and how to work with them.

November 17, 2010

A Journey Through Source Control. Part 2. Jumping to the 3rd Generation

Filed under: Development — rcomian @ 1:27 pm

Ok, I admit it, most people just don’t care about source control, and why should they? It’s a tool that they use to get things done, and so long as they don’t lose their work, who cares about anything else?

I think it's because of this disinterested mindset that a lot of people completely missed what the 3rd generation of tools was about. And it's not too surprising: advertising like "ACID compliant transactions – never partially commit a set of changes again!" isn't exactly hinting at the revolutionary upheaval that had happened on the server side, it just sounds like CVS in a database.

But what happened was that the history in the repositories was completely pivoted around. CVS, VSS, et al would keep a single tree of files, and each file would have its own history. SVN, TFS, etc, don't keep a set of files, they keep a set of trees. Each revision in the repository represents the state of every file in the entire source control tree at that point in time. Think of a stack of playing cards, each with the entire source control tree drawn on it. Each time you commit a change to any file in the repository, a new card is drawn up with the previous tree + your changes and put on the top of the pile. This means that the entire source tree in the repository can be reliably retrieved by giving just one number – just tell the system what card you want and you can get it.

No longer do we need hacks like ‘labels’ to make sure that everything is kosher and retrievable, everything is labelled with each and every change as an automatic by-product of how the repository works. Of course, we don’t lose the ability to view the changes that were made to a particular file: given a path, we can diff the contents of the file in one revision with the contents in the next and get the changes just like we could before. But we also get a brand new ability – by getting the difference between one tree and the previous tree, we can see all the files that were changed at the same time – and it’s this set of diffs that makes up a ‘changeset’.

It’s this natural creation and handling of changesets that gives us a much better ability to migrate changes around the repository. Say I have two identical copies of a file in two places in the repository. If I make a change to file A, I can retrieve the changeset at any time I like, this results in a diff, and I can apply this diff to file B whenever I feel like it. Expand this concept to directories and work with changesets instead of individual diffs, and we’ve got the basis for a very flexible and simple branching and merging system. Branching is simply making a copy of a directory in another location, merging is simply getting the changesets that were applied to one directory, and applying them to the other. This branching and merging can all be tracked within the repository itself and voila, we have a modern 3rd generation system with full branching support.

So now that we know what the differences between the 2nd & 3rd generations are, in the next post we'll address some of the pitfalls we fall into when we try to think of a 3rd generation system in terms of a 2nd generation one.

And don’t worry, I’ve not forgotten about the 4th generation, that’s coming, but since it builds on what the 3rd generation gives us, I think it’s important to square that away first.

November 16, 2010

A Journey Through Source Control. Part 1. Generations of Source Control

Filed under: Development — rcomian @ 1:27 pm

In this series I’m going to be looking at some of the key concepts available in source control, and how they can be applied to gain full control over your source code.

Now, in all fairness, I don’t make my living by telling people how to use source control, but it’s one of the things that fascinate me about software development, purely because it’s so underused and misunderstood by a large percentage of my fellow developers.

The first thing that I think is important to grasp is that we’ve already churned through several generations of source control tools. I can easily identify 4 generations based on just the surface architectures involved, and each one represents a significant advantage over the previous generations.

Source Control Generations

Generation | Feature                      | Example
1          | History of individual files  | SCCS/RCS
2          | Networked server             | CVS/VSS
3          | History of whole trees       | SVN/TFS
4          | Distributed repositories     | Git/Hg

So with the first generation of tools such as RCS, we had the ability to keep the history of the files we were working on. That was pretty much it, really. Working with multiple files was tricky at best, and sharing wasn’t exactly safe, as everyone would work in the same shared folder.

CVS & VSS came along and rescued us from this situation by wrapping RCS and keeping all its files in a repository, which could be accessed through a server on the network. This meant that all developers could have their own copy of their files on their machines, isolated from everyone else. Then when work was complete, the changes could be sent up to the server and new changes brought down.

This CVS style of working is all very well, but it still has problems – namely that files are still versioned individually and that it can be very hard to reliably isolate changes that happened together across a range of files after the fact. SVN and others came onto the scene to rectify this by keeping all the revisions in a single stack of changes, numbered sequentially. Changes to multiple files could be kept within a single revision, or changeset, and you could see not only what changes were made to a particular file, but drill down into each change to find out what the full story was each time. This also makes branching and merging a far more tenable idea, since a complete 'changeset' could be reliably migrated in full across different branches.

And most recently, after 3 generations of centralised repositories, Git, Hg and others threw away the need for an enforced central repository. The effect of this is that it no longer matters exactly where source code is located – what matters is the actual content itself, which could come from anywhere. Whether I pulled revision '24g4g2' out of the main company server, a junior developer's laptop, a pirate website or the smoking remains of a tape in the server room, I would be guaranteed that the code was exactly what I was after, including all the history up to that point. Central servers are now used at your team's convenience, not just because the tool demands one.

So there we go, a very short, and incomplete, potted history of source control tools.

Next we’ll look in more detail at the jump from 2nd generation to 3rd generation – and how so many people are thinking of it the wrong way round.

April 7, 2010

Straightening out the development cycle

Filed under: Development — rcomian @ 12:04 pm

This is me trying to get things sorted in my head.

Know your features

Features are your lifeblood. Organise them, manage them, combine them into stories. Process them with great care. Tuck them in and sing them to sleep at night.

All features are great. It’s the organisation of them that gives you the edge. A small set of coherent stories that each stand on their own will be far more useful than a complex network of micro abilities. A well managed, complex set of micro-abilities will be far more use than a vague idea that something might be good.

Don’t go mapless

It’s usually a good idea to have a high level idea of what the overall solution can look like. This lets you communicate the overall idea, discuss how things can be done and ensure that the whole idea is indeed possible. This map may include all the normal good things, asynchronous message manipulation manifolds, distributed data denegrators, partial parallel processors et al.

Keep the map light

Rather than follow the waterfall model of expanding this map until it’s a thousand page document, use a lightweight method to convey the idea. If that’s a block diagram on the back of an envelope scrawled in a coffee shop, or a powerpoint presentation, that’s fine, but convey the whole idea.

Implement features, not the map

Ignore the map. Pluck a single feature from your carefully tended garden. Build the feature as if it was the only thing that existed in the world. Don’t look to the other features to see what they need, look just at what you’re working on right now. Don’t look at the map and decide that you need the main message manipulation manifold if a simple web service will be more than sufficient for this feature. If you’re working with a single, simple document, save it in a file. Don’t add that database just because it’s on the map, follow what the feature needs.

Finish the feature

When a feature is complete, it looks like you knew what you were doing when you started writing it. It looks like you had the same thing in mind all the way through, and weren’t flip flopping from one way of doing things to another. Of course, you didn’t know what you were doing when you started, and you did flip flop all over the place. But don’t let that show on the finished product.

Lock in the feature

Have a way to automatically confirm that the feature works. Confirm this on many levels, from the outside (feature tests) to the actual code (unit tests) and in between (integration tests). Once a feature is locked in place, you can rest in the comfort of knowing it's not going anywhere, it's not going to be broken, and you can leave it alone and work on other things. It's very hard to properly lock in a feature after it's finished, so start locking it in before you start the feature. Put a lock in place for each part of the feature, then put that part in place.

The locks are part of your feature too, shoddy locks do a shoddy job of locking your feature in place.

Check your locks

Make sure you know when locks are broken. Run continuous builds on all branches of your code. Keep stats: build time, unit test code coverage, static analysis, build warnings, anything that lets you see trends in how the software is progressing. Check them.

Rearrange from the ground up

After you’ve added a new feature, the whole solution should look finished. As you add new features, don’t just bolt new code on top of the old, but make it look like the whole solution was written with the intention to have all the features that are available from the start. No code is sacred, no code is untouchable.

Lean on your locks to ensure that you’re not losing functionality or breaking a feature. Lose any locks that aren’t needed any more.

Keep learning

Use what works, find what doesn’t. This doesn’t need any special discipline, it’s what you’re bitching about as you’re writing your code. Your product has a lot to teach you. Learn it. Change things.
