What I learned last week… uh… months

It has been a long time since I wrote a post. Reasons vary. Most of it is down to my laziness and limited spare time. Some of it is down to a lack of motivation as well.

Anyway, during the last several months I have, surprisingly, learned many new things. I limited my pick to the following items:

  1. You cannot set the Prefer 32-bit option on a class library in .NET
  2. ORACLE RDBMS column names must not exceed 30 characters
  3. People suggesting that copy & paste for VPN connections must be disabled should be “taken care of”
  4. No matter what the task is, you must take your time to solve it
  5. The CSRF feature, known as warning SG0016, is annoying if you are implementing a public API
  6. How to use query string parameters in a non-RESTful API
  7. FastDirectoryEnumerator!
  8. When using an integration to move some data, always use a separate table

Now to details.


You cannot set the Prefer 32-bit option on a class library in .NET

The setting is located in Project properties -> Build, but it is disabled for class libraries. First of all, as per this StackOverflow article, the only difference between selecting “x86” as the platform target and using the “Prefer 32-bit” option is that an application compiled for “x86” will fail in an ARM-based environment, while an application compiled for “Any CPU” with “Prefer 32-bit” selected will not. My reasoning is that since executable projects define the architecture for the entire application, this setting would have no meaning in class libraries. Hence, it is disabled.


ORACLE RDBMS column names must not exceed 30 characters

Really. But only if you are running version 12.1 or lower. Otherwise, you can use names up to 128 characters long. We found that out the hard way while migrating an MSSQL database to the ORACLE platform. Anyway, you can find out how long your column and table names can be by running the following statement in your SQL client:
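
One way to check is to ask the data dictionary itself; a minimal sketch (standard Oracle dictionary views assumed):

    -- COLUMN_NAME is VARCHAR2(30) on 12.1 and lower, VARCHAR2(128) on 12.2 and higher,
    -- so the returned DATA_LENGTH is the identifier limit of your database.
    SELECT data_length
      FROM all_tab_columns
     WHERE table_name = 'ALL_TAB_COLUMNS'
       AND column_name = 'COLUMN_NAME';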


People, suggesting that copy & paste for VPN connections must be disabled, should be “taken care of”

The title says it all, really. Disabling the copy & paste option over a VPN connection might have some security benefits, and I am pretty sure some auditor can’t sleep if it is not disabled, but it is annoying as hell for anybody who actually tries to use a VPN connection for REAL work. Imagine you have to prepare a report for a customer that requires you to run a 300-line SQL statement. Obviously, you are not developing that in their environment; you are doing it in your local database. Now you just need to somehow get it to the customer’s system. Copy & paste seems harmless enough. Yeah, not going to happen. So now you need Dropbox (best-case scenario), or you mail that SQL to the customer’s admin and hope that person knows what he/she is doing. Not to mention the awkward situation when you find out you forgot to add just one more column or condition to your SQL statement.

Kudos to all auditors recommending a ban on copy & paste. NOT.


No matter what the task is, you must take your time to solve it

Sounds reasonable enough, right? Except when you are bogged down with work and a trivial but urgent task comes in, forcing you to drop everything and focus on that specific task. Hah, but the task is trivial. What could possibly go wrong? Well, for starters, the fact that assumption is the mother of all clusterfucks (pardon my French). So now you have solved the task half-arsed and passed it back to the customer, only to have it hit you right back on the head 30 minutes later. Instead of doing it properly the first time, you will have to do it a second and hopefully not a third time, taking even more of the time you didn’t have in the first place. Meanwhile, your reputation with your customer is sinking faster than the RMS Titanic.

Even in times of stress and distraction, it is important to remember that each and every task is worth your attention. If nothing else, it will save you minutes, if not hours, and leave your reputation intact.


The CSRF feature, known as warning SG0016, is annoying if you are implementing a public API

The “new” Visual Studio 2017 comes with an abundance of new features. One of them gives you security recommendations that behave as warnings; it is called Roslyn Security Guard. All fine and dandy. Sadly, though, most of those recommendations are useful only if you are developing internal applications. If you are building, let’s say, a public Web API, you really don’t want to hear that CSRF warning SG0016 telling you to validate an anti-forgery token. Especially as all requests are coming from other servers and you have no way to validate that token.

There is a workaround: you can add a suppression directive just below the class declaration, which silences the warning until you explicitly restore it further down.
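
Roughly like this, assuming the standard #pragma directive for Roslyn analyzer diagnostics (the controller and method names are made up):

    public class PublicApiController : ApiController
    {
    #pragma warning disable SG0016 // requests come from other servers; there is no anti-forgery token to validate
        [HttpPost]
        public IHttpActionResult CreateItem([FromBody] string payload)
        {
            // ... process the request ...
            return Ok();
        }
    #pragma warning restore SG0016
    }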

I would have still preferred a project option to disable that, though.


How to use query string parameters in a non-RESTful API

I had to connect to a 3rd-party non-RESTful API that invented all sorts of parameter-passing options, from classic JSON for POST requests to a combination of route parameters and query string parameters. As I had no access to the API from my development environment, I created a mock API and had to mimic the original API’s behavior.

For route parameters, you simply define a route that knows how to handle them, like so:
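
A sketch of what that can look like in an ASP.NET Web API mock (attribute routing assumed, names made up):

    public class OrdersController : ApiController
    {
        // Route parameters are bound by name from the URL template.
        [HttpGet]
        [Route("api/customers/{customerId}/orders/{orderId}")]
        public IHttpActionResult GetOrder(int customerId, int orderId)
        {
            // ... return a mocked order for these IDs ...
            return Ok(new { customerId, orderId });
        }
    }

Attribute routing also needs config.MapHttpAttributeRoutes() in WebApiConfig, in case the mock project does not have it yet.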

If you want to obtain a parameter from the query string, though, you must put [FromUri] in front of it in the method declaration:
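
A sketch along the same lines (again with made-up names), where the values come from the query string, e.g. GET api/orders?dateFrom=2017-01-01&dateTo=2017-01-31:

    public class OrdersController : ApiController
    {
        [HttpGet]
        [Route("api/orders")]
        public IHttpActionResult GetOrders([FromUri] DateTime dateFrom, [FromUri] DateTime dateTo)
        {
            // ... return mocked orders for the given period ...
            return Ok(new { dateFrom, dateTo });
        }
    }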



FastDirectoryEnumerator!

A quick task: you need to move 10,000 files from one folder to another.

Solution 1

Use Directory.GetFiles to get a list of all files in the directory and then use File.Copy to move them to another location.

The problem with this solution, however, is that although it works fast, it stores all file names in a string array, hogging your memory like crazy.
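
A minimal sketch of this approach (sourceDir and destinationDir are placeholders):

    // Directory.GetFiles materializes every file name into a single string[] up front.
    string[] files = Directory.GetFiles(sourceDir);
    foreach (string file in files)
    {
        File.Copy(file, Path.Combine(destinationDir, Path.GetFileName(file)));
    }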

Solution 2

Use Directory.EnumerateFiles to get a list of all files in the directory and then use File.Copy to move them to another location.

This is a much better solution, as it returns the files as IEnumerable<string>, which allows you to start processing them before they are all loaded.
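
The same sketch with the streaming variant:

    // Directory.EnumerateFiles yields names lazily as IEnumerable<string>,
    // so copying can start before the whole listing has been read.
    foreach (string file in Directory.EnumerateFiles(sourceDir))
    {
        File.Copy(file, Path.Combine(destinationDir, Path.GetFileName(file)));
    }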


Now imagine that the source, the destination, or both are on a network drive. In that case, the first solution takes around 30 seconds just to read all the file names. The second does not fare much better, reading them in about 25 seconds. And this is on a fast network drive.

Introducing FastDirectoryEnumerator for the next solution.

Solution 3

Using FastDirectoryEnumerator.EnumerateFiles, it read 10,000 files in about 20 milliseconds. Yes, that is right. Milliseconds.

You can check the documentation and implementation on the CodeProject site. The secret, apparently, is in not doing a round-trip to the network for each and every file. That, and calling into kernel32.dll directly.
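
And a sketch of the third approach; FastDirectoryEnumerator and its FileData type come from the CodeProject article, and the members used here (Name, Path) are assumptions based on its documentation:

    // FastDirectoryEnumerator yields FileData entries without an extra
    // round-trip per file, which is where the speed-up comes from.
    foreach (FileData file in FastDirectoryEnumerator.EnumerateFiles(sourceDir))
    {
        File.Copy(file.Path, Path.Combine(destinationDir, file.Name));
    }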


When using an integration to move some data, always use a separate table

Another project of mine has a bug. It is yet to be decided whether it is a human or a code problem, but in any case, the code should prevent such situations.

This is what happens. The code moves some data from the table ITEMS via a 3rd-party web service to their product. Transfers are driven by a column named STATUS in the table ITEMS, which must hold a certain value. The code sets the status to “moved to 3rd party service” before completion and to “error” in case of execution errors. Upon completion, a 3rd-party code is written into another field (let’s call it EXT_ID).

Unfortunately, the web interface for adding and editing items also uses the STATUS field for document workflow, meaning it sets the status on certain actions.

Lately, this has started to happen: an item gets picked, its status is set to “moved to 3rd party service”, and the transfer completes and sets EXT_ID. During this process, someone who has the item open in a browser clicks the “Confirm” button again in the web interface and sets the status back to “pending for transfer”. That action also clears EXT_ID. As the 3rd-party service checks for duplicates, the next transfer returns a duplication error.

To avoid this, a far better solution would be to create a table ITEMS_TRANSFER. A row would be added to this table (with a hash of the values) when a transfer is requested and removed (or marked as removed) when the transfer completes. This would certainly prevent duplication errors.
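
A sketch of what such a table could look like (Oracle-style types, illustrative names):

    CREATE TABLE ITEMS_TRANSFER (
        ITEM_ID       NUMBER        NOT NULL,
        CONTENT_HASH  VARCHAR2(64)  NOT NULL,                 -- hash of the transferred values
        REQUESTED_AT  DATE          DEFAULT SYSDATE NOT NULL,
        COMPLETED_AT  DATE          NULL,                     -- set (or the row removed) when the transfer completes
        CONSTRAINT PK_ITEMS_TRANSFER PRIMARY KEY (ITEM_ID)
    );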

What I learned last week at work #3

In a 3-day week, I only managed to learn how to get distinct IP addresses from a log file.

How to get distinct IP addresses from a log file

For a customer of ours, I had to screen two years of log files and find the distinct IP addresses matching certain criteria. You could check those log files by hand. Sure, it would take a month or two, but it could be done. However, if you are not keen on spending your days looking at log files line by line, here is what you can do:

  1. Grep the log files for the specified criteria;
  2. Parse the results to get all IP addresses;
  3. Use awk to print them on separate lines;
  4. Use awk again to keep only the distinct ones;
  5. Optionally, store the output to a file.

Ideally, you want to run all of this as one command:
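
A plausible shape for that pipeline (the log location, file names and grep criterion are assumptions; grep -o already prints one address per line, so a separate awk step for that is not needed here):

    # grep the criterion, extract IPv4 addresses one per line,
    # keep only distinct ones, and store the result in a file
    grep -h "POST /api/login" /var/log/apache2/access*.log \
      | grep -o -E '([0-9]{1,3}\.){3}[0-9]{1,3}' \
      | awk '!seen[$0]++' \
      > ip_addresses.log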

There you have it! The file ip_addresses.log now contains only distinct IP addresses.

I am pretty sure it can be done differently. You can leave your solution in the comments below.

What I learned last week at work #2

It’s been a quiet week at work: fixing a bug here and there, implementing minor features, writing some documentation, etc. Hence, this week’s findings are not programming related.

Without further ado, here is what I learned last week:

  • Windows 10 app restart on unexpected shutdown (or after update restart) cannot be disabled;
  • Solving ‘PkgMgr.exe is deprecated’ error.

Now to details.


Windows 10 app restart on unexpected shutdown cannot be disabled

With the Fall Creators Update, Windows 10 gained an interesting feature. Much like OS X, it restores your applications after an unexpected shutdown or a maintenance restart. Now, I bet this feature sounds great on paper and I bet it is perfect for the everyday user. However, it is totally useless and annoying to anyone who does anything more with his/her computer than browsing the internet and watching the occasional X-rated movie.

Imagine this. At the point of a maintenance restart (updates have finished installing), I have 7 instances of Visual Studio 2012 in administrator mode, 5 of Visual Studio 2010 (again in administrator mode), 6 Microsoft SQL Server Management Studios, Notepad++, Outlook, 3 Word documents and 5 Excel worksheets open. I am not even going to count remote desktop sessions and other minor software windows. Now the computer reboots, comes back up, and I am presented with a login prompt. After typing my password 3 times (seriously, I need another password), the OS starts loading all the windows mentioned above. Except it opens all the Visual Studio instances in normal mode and without opened solutions (thanks for that, btw). The same goes for SQL Server Management Studio: it opens 6 instances, not one of them with an active connection or at least the correct SQL instance selected. Useless and annoying.

To top it all off, this feature apparently cannot be turned off, and no update to make that possible is scheduled at this point.

Solving ‘PkgMgr.exe is deprecated’ error

After a server came crashing down, we had to set up a new one. After completing the installation of the server roles and features and of our applications, I tried running some of them and got a Service unavailable error. I tried to register .NET by issuing the usual registration command.
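
Presumably the classic aspnet_regiis registration tool, something along these lines (the framework folder is an assumption):

    %windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i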

This returned another error: PkgMgr.exe is deprecated. Quick googling found this page, which explains that the cause of the error is a missing ASP.NET installation. I went back to the server role installation and selected ASP.NET 3.5. That solved the problem.

What I learned last week at work

I am a firm believer that if you are not learning anything new at your work, it is time to move out of that comfort zone, pack your bags and find a gig where you will. Lately, my work has shifted and now consists of 99% maintenance grunt work and 1% actual new development. In that kind of situation, a person can easily forget that, despite chewing the dog food, there is an occasional pickle here and there. So I created this series: to remind myself that I am still learning something new and to, hopefully, provide some extra value to whoever stumbles upon this place.

So, these are the things I learned in the past week:

  1. The verb INTO is not necessary when running INSERT SQL statements on Microsoft SQL Server;
  2. Direct cast of a column value of a System.Data.DataRow object in .NET 1.1 does not work anymore on Windows Server 2012 and Windows 10;
  3. How to compare strings with fault tolerance.

Now to details.


The verb INTO is not necessary when running INSERT SQL statements on Microsoft SQL Server

Debugging some odd mishap, I located the following piece of code:
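
The gist was an INSERT with the INTO left out, along these lines (table and columns are made up):

    INSERT Customers (Name, City)
    VALUES ('Contoso', 'Ljubljana');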

According to the SQL standard, the verb INSERT should be followed by the verb INTO. Except it wasn’t. I thought this had to be some obsolete code that no one uses. I checked the references and found a few, so that wasn’t it. The code obviously worked, as it has existed since 2012. So what the hell?! Well, it turns out that even though the verb INTO is mandatory by the standard, most implementations (Microsoft SQL Server included) ignore this and keep it optional. I am definitely not adopting this, but it certainly is interesting.


Direct cast of System.Data.DataRow column value in .NET 1.1 does not work anymore on Windows Server 2012 and Windows 10

Yes, I know. Microsoft stopped supporting the .NET 1.1 framework with Windows 7. Still, we have some projects that run (or, more accurately, ran) properly even on newer Windows OS. Except that with every update to Windows 10 and Server 2012 it becomes more and more obvious that .NET 1.1 is being pushed out.

The latest thing was an InvalidCastException when executing this statement:
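
Presumably a direct (unboxing) cast along these lines (the column name is made up):

    int status = (int)row["STATUS"];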

where row is of type System.Data.DataRow. One would think that the value is not an integer, but in this case it was 103, which, by my books, is an integer. Interestingly enough, this works:
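
Converting instead of casting directly, roughly:

    int status = Convert.ToInt32(row["STATUS"]);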

Go figure.


How to compare strings with fault tolerance

In one of our projects, searching by people’s names and surnames just wasn’t good enough. Spelling mistakes and plain characters typed in place of unicode ones were supposed to be taken into account.

After 5 minutes of “googling”, I found a StackOverflow answer that suggested using the Damerau-Levenshtein distance algorithm. The Levenshtein distance algorithm provides a way to calculate the number of edits needed to turn one string into another. The Damerau-Levenshtein algorithm is an upgrade that also allows characters to be transposed.

However, this is just the first step. The algorithm gives you a number of edits; to use it, you still need to define a threshold for how many mistakes you will allow. Fixed values are just not good if your string length varies, so I used half of the length of either the search query or the provided value. It works like a charm.
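
A minimal sketch of the idea in C# (the optimal-string-alignment variant of Damerau-Levenshtein; the half-length threshold interpretation here is mine):

    using System;

    static class FuzzyMatch
    {
        // Optimal string alignment variant of the Damerau-Levenshtein distance.
        public static int Distance(string a, string b)
        {
            var d = new int[a.Length + 1, b.Length + 1];
            for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
            for (int j = 0; j <= b.Length; j++) d[0, j] = j;

            for (int i = 1; i <= a.Length; i++)
            {
                for (int j = 1; j <= b.Length; j++)
                {
                    int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                    d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,    // deletion
                                                d[i, j - 1] + 1),   // insertion
                                       d[i - 1, j - 1] + cost);     // substitution
                    if (i > 1 && j > 1 && a[i - 1] == b[j - 2] && a[i - 2] == b[j - 1])
                        d[i, j] = Math.Min(d[i, j], d[i - 2, j - 2] + 1); // transposition
                }
            }
            return d[a.Length, b.Length];
        }

        // True when 'candidate' is within half the length of the shorter string.
        public static bool IsMatch(string query, string candidate)
        {
            int threshold = Math.Min(query.Length, candidate.Length) / 2;
            return Distance(query, candidate) <= threshold;
        }
    }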

Quick tip: Optimizing a repeating try-catch-finally statement

Lately, I’ve started noticing a pattern in the data layer of one of our projects at work. The pattern looks like this:
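
A reconstruction of the kind of pattern I mean, with made-up names:

    public DataTable GetCustomers()
    {
        SqlConnection connection = null;
        try
        {
            connection = CreateConnection();   // assumed base-class helper
            connection.Open();
            using (var command = new SqlCommand("SELECT * FROM Customers", connection))
            using (var adapter = new SqlDataAdapter(command))
            {
                var table = new DataTable();
                adapter.Fill(table);
                return table;
            }
        }
        catch (Exception ex)
        {
            Log(ex);                           // assumed logging helper
            throw;
        }
        finally
        {
            if (connection != null)
                connection.Close();
        }
    }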

This repeats itself in just about every data layer method. Lines and lines of useless, repetitive code, for which I am also much to blame. So I thought: “There must be a better way than this.”

And there is. I created this method in the data layer base class:
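
One way to shape such a helper is a delegate-based wrapper (a sketch; it matches the note about returning object below):

    // Owns the connection handling and the try-catch-finally; the actual data
    // access work is passed in as a delegate and run inside the protected block.
    protected object Execute(Func<SqlConnection, object> work)
    {
        SqlConnection connection = null;
        try
        {
            connection = CreateConnection();   // assumed helper
            connection.Open();
            return work(connection);
        }
        catch (Exception ex)
        {
            Log(ex);                           // assumed helper
            throw;
        }
        finally
        {
            if (connection != null)
                connection.Close();
        }
    }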

This enables me to change every data layer method to look like this:
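
A sketch of a data layer method after the change:

    public DataTable GetCustomers()
    {
        return (DataTable)Execute(connection =>
        {
            using (var command = new SqlCommand("SELECT * FROM Customers", connection))
            using (var adapter = new SqlDataAdapter(command))
            {
                var table = new DataTable();
                adapter.Fill(table);
                return table;
            }
        });
    }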

This solution has a small issue, though. If you are doing an insert or an update, you might not want to return anything. As you cannot return void, just define the return type to be object and return null. I am prepared to live with this.


Web developer: Why does 2017 feel exactly like 1997?

I don’t know how many of you, dear readers, still remember what it was like to be a web developer in the late 90s. You know, the time when not every kid knew how to make web sites. The time of Geocities, Angelfire and Lycos. The time without Google (well, nearly). The time before cross-browser JavaScript frameworks and with no real support for CSS2. These were the times when 5% of your work was building the actual web site and 95% of the time was spent tweaking HTML, CSS and JavaScript to actually work in Netscape 4 and IE 5. And in the end, you somehow always ended up doing the layout with tables in tables in tables… Yeah. You didn’t want to be THAT guy.

But, with the web becoming “the thing” and web sites starting to blossom, we got Netscape …uhm… 4 and IE6, CSS2 support improved (yeah, right) and the first semi-useful cross-browser library came to life. It was called cross-browser and, surprisingly enough, it is still online. As funny as it looks today, this was the first library where you didn’t have to pay attention to browser specifics. It gave us at least a glimmer of hope that the future was going to be better and bright…

Fast-forward 20 years. Internet Explorer and Netscape are a thing of the past. Chrome, Firefox, Edge and Safari are now in. We have full CSS2 support (well, very nearly) and so many cross-browser JavaScript libraries that we can’t even name them all. Yet, working on my side project TimeLoggerLive, I started to wonder: is it really that different? I mean, sure, new technologies are out (HTML5, CSS3, Angular7000, etc.), but have things actually changed for web developers?

CSS3 has been in the works since the late 1990s and the HTML5 standard has been in preparation since the late 2000s. Yet all these new browsers, which we switched to because of the promised support for “the latest and greatest” standards, still don’t support either in full. Worse even: like back in the 1990s, each and every browser implements the standards differently. You can imagine the confusion.

In the case of TimeLoggerLive, my kryptonite is the contenteditable attribute. Its functionality is awesome: by setting its value to true, you are supposed to be able to edit the content of any HTML tag that carries the attribute. Handy. Except it does not work on all tags in the IE and Edge browsers, Firefox behaves strangely if you use it on an empty cell, and Chrome, which offers the best implementation of it, for some odd reason distorts column widths.

I checked one of my favourite pages, CanIUse.com, and contenteditable is marked as fully supported across all browsers except Opera Mini. However, there is a “known issues” section, which explains that the IE browser does not support the contenteditable attribute on the following tags: TABLE, COL, COLGROUP, TBODY, TD, TFOOT, TH, THEAD, and TR. To work around this, one needs to put a DIV tag into each table cell. Groovy. Except, when you add the DIV tag, suddenly all browsers start showing a border around the editable content, which leads to more nasty CSS hacks.
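
A sketch of the workaround (class name and styling are illustrative):

    <!-- IE/Edge ignore contenteditable on table cells, so the editable element
         is a DIV placed inside the cell instead. -->
    <td>
      <div contenteditable="true" class="editable-cell">2:30</div>
    </td>

    /* ...and one of the follow-up CSS hacks: suppress the outline browsers
       draw around the focused editable DIV. */
    .editable-cell:focus {
        outline: none;
    }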

Yes. It feels exactly like 1997.

Quick tip: Setting Oracle client collation

This week one of our clients experienced an interesting problem: data obtained from an ORACLE database did not display unicode characters. They were either replaced by ‘?’ or by some other character.

This happens for one of two reasons (or, in the worst-case scenario, both): either your database has the wrong collation or your ORACLE client does. The former is a bit difficult to fix, as you will need to change the database collation and the existing data. The latter is a bit easier. Here is how you do it:

ORACLE client 8.x
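
The setting in question is NLS_LANG, which an 8.x client reads from the ORACLE registry key; roughly like this (the exact path can differ per installation):

    reg add "HKLM\SOFTWARE\ORACLE" /v NLS_LANG /t REG_SZ /d SLOVENIAN_SLOVENIA.EE8MSWIN1250 /f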

ORACLE client 11.x
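
For an 11.x client, NLS_LANG lives under the client home key instead (the KEY_* name depends on the install):

    reg add "HKLM\SOFTWARE\ORACLE\KEY_OraClient11g_home1" /v NLS_LANG /t REG_SZ /d SLOVENIAN_SLOVENIA.EE8MSWIN1250 /f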

I had to set the Slovenian WIN1250 encoding, and that is what the samples do. More languages and options can be found in the ORACLE documentation here and here.

Failed to load resources from file. Please check setup

Not so long ago, an application written in .NET 1.1 started popping up this error here and there. The funniest thing, though: only Windows 10 clients with the Creators Update installed were affected. Now, we could argue about why there is still an application written in .NET 1.1 up and running, but that would be a lengthy debate which I really don’t want to get into right now. Or ever.

Anyway. The error, as descriptive as it is, means only one thing: somewhere in your code there is a StackOverflowException. In case you are wondering: no, the event log won’t show a thing. After much trial and error, I narrowed the problem down to this chunk of code:
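
A reconstruction of the shape of that chunk (GetValueEx is the method in question; the rest of the names are made up):

    private string GetValue(string key)
    {
        try
        {
            // the "line 3" referred to below: the result of GetValueEx is used without a null check
            return GetValueEx(key).ToString();
        }
        catch (Exception)
        {
            return string.Empty;
        }
    }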

The method GetValueEx returns a response of type object. In this particular case, it should have been a string, but as there are no hits in the database, it returns null. So, basically, line 3 of the method GetValue should have thrown a NullReferenceException, which the catch statement should have caught. Except it doesn’t.

I don’t have enough information to explain all the details, but on Windows 10 Creators Update line 3 throws a StackOverflowException, which is for some odd reason not handled by the try-catch block. And this causes the “Failed to load resources from file. Please check setup” error.

Knowing this, I modified my code to:
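
Again a sketch of the idea rather than the exact code: check for null before touching the result, so the problematic dereference never happens:

    private string GetValue(string key)
    {
        try
        {
            object value = GetValueEx(key);
            if (value == null)
                return string.Empty;   // no hit in the database: nothing to dereference

            return value.ToString();
        }
        catch (Exception)
        {
            return string.Empty;
        }
    }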

Needless to say, the fix works without a glitch. Being a good Samaritan, I have also posted the answer to this StackOverflow question.

TimeLoggerLive early-bird pre-order

There has been a lot said and written about how people should log the time they spend on tasks. Some claim you should log only the time you actually worked on a task, others that you should log all of your time, including interruptions, lunch breaks, etc. From the project standpoint, I agree with the latter option. However, when it is you who needs to track where your time went, you are faced with a difficult task.

I guess you could use the time logging features of your project management tool, but that is usually tedious and time consuming. Not to mention impractical.

You could use one of the thousand apps out there that require you to just press a start button when you begin a task and a stop button when you stop. But these usually come up with results like 2 hours and 33 minutes when you really wanted to log 2 hours and 30 minutes. This leads to editing and even more time lost. Also, all the applications I have seen and tested require you to enter tasks first, which is, in my books, double work. Especially when we already use a project management tool.

Personally, I use pen and paper. Archaic and environment-unfriendly. It works, but it has a serious issue: in my line of work I do a lot of context switching, and at the end of the day I spend some time just summing up the time spent per task. It is not particularly time consuming, but it is tedious and error prone.

So, I have this idea of a time logging web application that would be as simple as logging time on paper: just a start time, an end time and a description of your work. The description would offer type-ahead of already entered descriptions, enabling easier summation of your daily time consumption. And you would be able to get a nice, condensed report for each day, week and month.

In future editions, integration with popular project management tools like JIRA, Trello and Trac is planned. But for now, I am aiming for the simple, get-things-done principle.

With all that said and written, today I can proudly announce that the TimeLoggerLive early-bird pre-order is available. As an early bird, you are entitled to:

  • minimum 30% lower subscription price for first 3 years (1st year at 80% off),
  • access to all development and future versions of the application,
  • hassle-free, any-time money back guarantee,
  • personalized and friendly support.


And the best part is, even if you decide to leave TimeLoggerLive, we will keep your data (unless you request otherwise) in read-only form, available to you online, if you ever need it again.

For companies, the product will also feature the creation of teams with an overview of their logged activities, bundling of tasks into meaningful projects, and user management.

TimeLoggerLive is currently in development and is expected to go live on November 1st, 2017. I expect the first beta to be done by August 1st, 2017 at the latest. By registering today, you will help TimeLoggerLive become awesome. And personally, I look forward to having you as a customer.

Make it pink

It is always an interesting time when a new project is on the board. This time it was a mobile application, and my coworkers and I were tossing ideas left and right. So the discussion turned to what color scheme the user interface should use. Mostly out of fun (and a bit out of envy), I claimed: “Pink. It should be pink.”

Back then, I thought nothing of it. The statement was meant as a joke, and the color was picked based on what would be most inappropriate for this project. My coworkers took it as a joke as well, a (more and more annoying) joke I threw at them every time the discussion turned to the UI for some reason. But, as things go, the project was eventually ready to be presented to prospective customers. Naturally and understandably, sales pushed for the customer company’s brand color scheme for each demo. Although my coworkers did a wonderful job on the project, it still took some fiddly last-minute work to sort out the new color scheme and base it on a theme.

At that point, I started to realize that, as dumb as my “idea” was, it would have been beneficial to the project if the UI really had used pink. The reasoning is pretty trivial. When creating a project from scratch, there is usually so much work on the basic mechanics, feature implementation and testing that the UI implementation is almost an afterthought. Hence, to solve glitches as quickly as possible, we developers tend to take the line of least resistance and do something stupid like specifying a color or a font inline. I admit, I have done it numerous times on other projects. I know it is wrong. Heck, I knew it back then. But I justified it with the fact that there just wasn’t enough time.

Now, if I had made one of those projects pink, I am pretty sure I would have done everything in my power not to use cheap corner-cutting techniques and to make darn sure I could swap the color scheme faster than you can say “cookie” with a mouth full of, well, cookies. This applies to all other areas as well. Want to make sure you do not have static text in your project? Use upper-case text for everything. Want to write data somewhere? Store it in a file on a floppy instead.

Thus, the next time you are about to do something, make it pink!