Custom software – Introduction

Custom software is software that is usually built for one client, one environment and a limited user base. Its specific use case means it won't be sold more than once, which translates to a price high enough to cover the development costs. And then some.

So why do companies decide to order and pay for custom software over and over again? Custom software has one big advantage over the regular off-the-shelf kind: it adapts to the whims and fancies of the customer. Hence the name, custom. Now, here is a public secret: companies, especially wealthy ones, dig adaptable. You see, where smaller companies don't have (many) defined processes, big companies run on tightly defined processes, usually certified under ISO 9001 or a similar standard. An off-the-shelf solution is simply not an option, as changing a process would cost a lot of money, time and resources.

As a developer with a career in building custom applications, I have made and seen my share of mistakes and fallen into several pitfalls. This series is here to help you (and also future me) avoid them as best as possible.

In part I of the series, I will be blabbering about project architecture and rediscovering last year's snow in the monolith vs. microservices debate. We will cover how to start, how to continue and how to take care of the accumulated tech debt.

Part II will focus entirely on how to integrate your software with third-party software. There is always something to integrate with in the custom software business, and it is better to be prepared.

Part III will touch on what happens when your custom software gets resold to another and another and another customer. The main focus will be on how to handle new requirements and, hopefully, not mess up existing and future deployments.

Last but not least, a disclaimer. This series is made of findings I gathered over my career of building custom software. As with many things in software, it definitely does not present the only right way. In some cases you might even find it completely wrong. Good. Let me know.

API pr0n

I doubt that anyone serious about tech could avoid hearing at least a little bit about the npm "disaster" of recent weeks. Some say it was a developer's ego that brought the entire ecosystem crashing down. Others blame npm for caving in to capital instead of open source. But the problem lies elsewhere.

In recent years, a phenomenon has started to occur in software development circles that I like to call API porn. It is caused by the fact that services such as npm, Bower and NuGet make adding APIs to a project as easy as snapping your fingers. Don't get me wrong, I love code reuse, but this has crossed every possible edge of reason.

Let me explain why. One of the most challenging and difficult things for a developer to do is an API. Why? It needs to be open just enough that another developer can use it, but not in an irregular way. It needs to capture every single edge case, so that a memory leak does not make you reboot your app every 15 minutes, or your application does not crash when your users enter Unicode characters. Speaking from experience, this takes a lot of thinking, tweaking, debugging and testing.

With that in mind, there are more than 500,000 APIs on NuGet alone, and I am guessing npm can beat that without breaking a sweat. Half a million APIs. Have developers become so much better at writing APIs, or is it that we publish just about everything nowadays? As much as I would like to believe it is the former, quantity wins.

Back to npm's little problem. The package that did the most "damage" was the left-pad package. Let that sink in for a moment or two. The left-pad package is nothing fancy (no offense to the author). It is a simple JavaScript function that adds custom padding in front of a given string. Yes, that is correct. We have come so far that people actually think it is better to include a third-party left-padding extension than to write their own. Especially when writing it yourself takes about 35 minutes, including a lunch break. And these same people are now blaming the developer and npm when their projects don't compile or deploy? It somehow reminds me of people who copy and paste the first StackOverflow solution (without reading it) and then complain that it does not work.
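For a sense of scale, here is roughly what such a function amounts to, sketched in C# since this blog lives mostly in .NET land (the original is JavaScript, and .NET already ships string.PadLeft out of the box):

```csharp
static class Pad
{
    // Pad a string on the left with a given character until it reaches the desired length,
    // e.g. LeftPad("5", 3, '0') returns "005".
    public static string LeftPad(string value, int length, char padChar = ' ')
    {
        if (value == null) value = string.Empty;
        return value.Length >= length
            ? value
            : new string(padChar, length - value.Length) + value;
    }
}
```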

This must stop. Now. I know that package managers open up an entire world of possibilities, but "with great power comes great responsibility", and as the npm crash made obvious, most developers just cannot be trusted with it.

Windows 10 changes Slovenian locale (at last)

If you are a proud developer of software sold internationally or in Slovenia, you might want to pay a little attention to this. It has not been noted anywhere of importance, and I discovered it by chance when one of the programs I am working on crashed.

Versions older than Windows 10 used the following Slovenian date format: "d.MM.yyyy". As understandable as this is from a computing point of view, it is in conflict with Slovenian language rules, which call for a space after each dot.

So, in Windows 10, someone made a bold move and fixed said format to "d. MM. yyyy". Finally, one might say. Except that every application relying on the previous, incorrect date format can now break. This is especially true for C++ OLE code, where calling COleDateTime::ParseDateTime with the OS-set language now fails if the string representation of the date is in the "d.MM.yyyy" format and the OS primary language is Slovenian. Let me state here that .NET handles the date formatting without problems.

If you are targeting Windows 10 clients only, this is not a problem per se; you just need to make sure there is a space after each dot in the date-time string representation. If you are targeting multiple OS versions, you are cooked, as older Windows versions do not recognize the new date-time format.
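If your managed code has to read date strings that may have been produced under either locale (stored data, exported files and the like), one way to stay independent of the OS-supplied pattern is to list both variants explicitly. A minimal sketch, with the helper name being mine:

```csharp
using System;
using System.Globalization;

static class SlovenianDates
{
    // Accept both the pre-Windows-10 pattern ("d.MM.yyyy") and the new one ("d. MM. yyyy"),
    // so parsing does not depend on which pattern the current OS locale happens to provide.
    private static readonly string[] Formats = { "d.MM.yyyy", "d. MM. yyyy" };

    public static bool TryParse(string text, out DateTime value)
    {
        return DateTime.TryParseExact(text, Formats,
            CultureInfo.GetCultureInfo("sl-SI"), DateTimeStyles.None, out value);
    }
}
```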

I am still searching for the least painful workaround that would work on any Windows client. If anyone has an idea, you are more than welcome to share it.

Quick tip: Cache busting in ASP.NET revisited

Anyone who has tried to cache static files eventually reached the point where the cache caused more problems than it solved. After all, telling all your users to press Ctrl + Refresh in their browsers is not exactly how one should do things on the web. Two years ago Mads Kristensen presented us with a solution in his article Cache busting in ASP.NET. The solution uses a Fingerprint class that basically updates the cached entry every time a static file is changed.

All fine and well. So why am I jabbering about it now? The solution, in my opinion, has two glitches:

1. It is tightly bound to the URL Rewrite IIS module.

2. It always links static content to the root URL.

So, without further ado, I present you with an "upgraded" solution that avoids both issues and works with relative URLs (relative to the application URL, anyway). In-page usage remains the same.

This code is also available as a gist.
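In spirit, the helper looks something like this rough sketch (the names are mine, not the exact gist): the file's last-write time is appended as a query string, so no URL Rewrite rule is needed, and app-relative "~/" paths are resolved so the link stays relative to the application URL.

```csharp
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

public static class Fingerprint
{
    public static string Tag(string appRelativePath)   // e.g. "~/Content/site.css"
    {
        string cached = HttpRuntime.Cache[appRelativePath] as string;
        if (cached != null)
            return cached;

        string physicalPath = HostingEnvironment.MapPath(appRelativePath);
        long ticks = File.GetLastWriteTimeUtc(physicalPath).Ticks;

        // VirtualPathUtility honours the application root, so the link still works
        // when the site is deployed under a virtual directory.
        string url = VirtualPathUtility.ToAbsolute(appRelativePath) + "?v=" + ticks;

        // A CacheDependency on the file invalidates the entry the moment the file changes.
        HttpRuntime.Cache.Insert(appRelativePath, url, new CacheDependency(physicalPath));
        return url;
    }
}
```

In a view, usage then stays along the lines of href="@Fingerprint.Tag("~/Content/site.css")", much like in the original article.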

Know thy tools

A few years ago, I wrote a post on why I don't trust 3rd party APIs. It has never been more true than today (in the days of NuGet, npm and other package managers), when adding an API is a matter of seconds. Seriously, you don't even have to break a sweat. Package managers and web searches have got you covered. We live in days when even "evil" Microsoft has decided to move to the good side of the force and gone open source. Right? APIs get updated more promptly than ever and they are available for just about anything. And code reuse saves you a lot of time. So why am I still skeptical about it?


Memory leaks in .NET?

In olden times, when I did C and C++ development, memory leaks were a common thing. From time to time I still wake up from nightmares in which I debug, trace and try to locate and plug memory leaks (of course, as it is a nightmare, I fail to do either, or it wouldn't be a nightmare). Thankfully, in unmanaged code memory leaks can be spotted in a debugger (or via a plain old segmentation fault), and if you are paying attention, you can plug the leak right after you make it.

Then came Java and .NET, JIT compilers and a great thing called the garbage collector. And we were told we could forget about memory allocation, deallocation and pointers as a whole. We now have a garbage collector that will do this for us. And it does, if you know how to use it. But we have become so reliant on the garbage collector that we just assume it will do everything for us. These days, it seems developers think that just because an application is written in .NET, it will somehow obtain immunity to memory leaks.

Boy, do I have news for you! It won't. Just last week, I spent most of my time debugging, tracing and removing a memory leak that caused an otherwise stable Windows service to throw an OutOfMemory exception within 90 minutes. As .NET code is managed, tracing the leak gets tricky, and the fact that the code was not mine only made it worse.

If you write a one-off trivial application, memory leaks are nothing to worry about, as the memory is reclaimed anyway when the application is closed. And since the application runs for only a short period of time, unless you do something completely moronic, you are fine and users might not even notice.

But as soon as you start writing serious applications, applications that have to run 24/7 (e.g. a Windows service), things change. Any serious application has to assume that if an object is not disposed correctly, it will accumulate in memory until the application is closed or crashes. Preferably on Friday at 3 pm, when you are about to head for your well-deserved vacation.

So what can you do? Personally, I stick to one simple rule that removes the risk of a good portion of possible memory leaks: any object that implements the IDisposable interface has to be either wrapped in a using statement or disposed explicitly, preferably in the finally block of a try-catch statement. It also helps to go the extra mile and drop references to objects as soon as you are done with them, thus allowing the garbage collector to do its job properly.
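A minimal sketch of that rule (the file path is just an example): the first method relies on using, the second shows the explicit try/finally equivalent.

```csharp
using System.IO;

static class DisposeExamples
{
    // Preferred: "using" disposes the reader even if an exception is thrown.
    public static string ReadWithUsing(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return reader.ReadToEnd();
        }
    }

    // Explicit equivalent: dispose in "finally", so the file handle is released
    // no matter how we leave the try block.
    public static string ReadWithFinally(string path)
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader(path);
            return reader.ReadToEnd();
        }
        finally
        {
            if (reader != null)
                reader.Dispose();
        }
    }
}
```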

And our memory leak? With great cooperation from our customer and some good fortune, we obtained enough information to locate it. It was caused by a Stream object being opened about a million times without ever being closed.

Quick tip: Extending column which is also (part of) primary key

At work, I got the short task of extending a varchar column in our database from 25 characters to 50. The task seemed easy enough. But, as things usually turn out, it contained a GOTCHA! The column was also part of the primary key on that table. So I started to wonder: can you resize a column that is part of a primary key without recreating the primary key itself?

I ran some empirical experiments on MSSQL and Oracle and came to a conclusion: you actually can just extend the length of a primary key column, and the primary key will adjust automatically.

Also, in case you are wondering, the same applies to all possible constraints on the table that include said column.
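For illustration, a minimal sketch of what that experiment boiled down to, run from .NET for convenience; the table and column names are made up, and on Oracle the equivalent statement uses MODIFY instead of ALTER COLUMN.

```csharp
using System.Data.SqlClient;

static class WidenPrimaryKeyColumn
{
    public static void Run(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = connection.CreateCommand())
        {
            connection.Open();
            // Widening the varchar column in place; in my tests the primary key
            // (and other constraints on the column) stayed intact.
            // Oracle equivalent: ALTER TABLE Orders MODIFY (OrderCode VARCHAR2(50));
            command.CommandText =
                "ALTER TABLE Orders ALTER COLUMN OrderCode varchar(50) NOT NULL";
            command.ExecuteNonQuery();
        }
    }
}
```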

Absolute minimum you need to know about storing passwords

In 2013, Forbes reported that approximately 30,000 sites across the globe are hacked each day. Yet, for some odd reason, in 2015 I still see web applications that employ questionable password-storing mechanisms, e.g. storing passwords in plain text or just hashing them with MD5. With 30,000+ sites hacked a day, there is a strong chance that your site will be next.
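For reference, the usual absolute minimum is to store a salted, deliberately slow hash rather than the password itself (and certainly not plain text or bare MD5). A sketch using .NET's built-in PBKDF2 implementation, Rfc2898DeriveBytes, with illustrative parameter values:

```csharp
using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    // Illustrative parameters; raise the iteration count as hardware gets faster.
    private const int SaltSize = 16;
    private const int HashSize = 32;
    private const int Iterations = 100000;

    // Returns "iterations.salt.hash", so everything needed for verification is stored together.
    public static string Hash(string password)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            byte[] salt = pbkdf2.Salt;
            byte[] hash = pbkdf2.GetBytes(HashSize);
            return Iterations + "." + Convert.ToBase64String(salt) + "." + Convert.ToBase64String(hash);
        }
    }

    public static bool Verify(string password, string stored)
    {
        string[] parts = stored.Split('.');
        int iterations = int.Parse(parts[0]);
        byte[] salt = Convert.FromBase64String(parts[1]);
        byte[] expected = Convert.FromBase64String(parts[2]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            byte[] actual = pbkdf2.GetBytes(expected.Length);
            // A constant-time comparison would be even better; kept simple here.
            return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
        }
    }
}
```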


Mobile apps development: Part I – Environments

Recently, I started to learn how to build mobile apps. Instead of the classic ToDo app that everyone is so fond of, I decided to build a slightly more realistic application. For starters, it would include first-time initialization that would run (what a surprise!) on the first run of the installed app. This initialization would check the connection, register an account on my server (using a web API) and use SQLite as local persistent storage.


Quick tip: Do proper exception handling

This might be a bit of a d'oh, but if you decide to handle an exception, handle it properly. What do I mean by "properly"? Have you ever seen code like this?
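Roughly, the offender looks like this (a reconstruction of the pattern described below, not the exact original snippet):

```csharp
using System;
using System.IO;

static class SilentFailure
{
    public static void Save(string path, string content)
    {
        try
        {
            File.WriteAllText(path, content);   // any call that can realistically fail
        }
        catch (Exception)
        {
            // Swallowed: no logging, no re-throw, no message to the user.
        }
    }
}
```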

This code is stupid. Exceptions are a sign of exceptional or unpredicted behaviour, so they should be treated with all seriousness. The code above works fine if no exception is thrown. However, we live in an unpredictable world. The internet connection may die. The hard drive may crash. The computer can run out of memory. And in the case above, as soon as an exception is thrown, neither you nor the user will have the slightest clue what went wrong.

There are two ways to solve this:

  1. Re-throw the exception using the throw; statement and handle it further up the call stack, or
  2. Handle it right there, in the catch block.

 

I understand that sometimes you just don't see the need to handle an exception, but the least you can do is write the exception to a log file, as in the sketch below.
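A minimal sketch of both options (the file paths and the log destination are just placeholders):

```csharp
using System;
using System.IO;

static class ProperHandling
{
    // Option 1: let the caller deal with it, but keep the original stack trace with "throw;".
    public static string Load(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException)
        {
            // ...clean up anything local here...
            throw;                              // re-throw; do not use "throw ex;"
        }
    }

    // Option 2: handle it here, but at the very least record what actually happened.
    public static string LoadOrDefault(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException ex)
        {
            File.AppendAllText("error.log", DateTime.Now + " " + ex + Environment.NewLine);
            return string.Empty;                // a deliberate, documented fallback
        }
    }
}
```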