Software Boundaries

I’ve given a lot of thought to the notion of boundaries in software lately, ever since I watched Gary Bernhardt’s excellent talk titled “Boundaries” at Ruby Conf 12 a few months back. It spurred in me a new appreciation for boundaries in software, particularly as they relate to design, testability, and imperative versus functional approaches.

Largely for amusement, but also as an act of gentle self-reinforcement, I reread Erik Dietrich’s colorful blog post titled “Visualization Mnemonics for Software Principles,” which gives a great overview of the SOLID principles and the Law of Demeter. It struck me that there is a common thread across all of these principles: boundaries.

Each principle more or less establishes a boundary, and instructs how it should be respected.

First, there’s the Law of Demeter. It basically states that you shouldn’t hand over more information to a method or component than is necessary for that method or component to function. The invocation of that method or component is the boundary, and at that boundary, provide only what is necessary.
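
To make that concrete, here’s a minimal C# sketch (the types and the rate logic are hypothetical, purely for illustration):

public class Address { public string ZipCode { get; set; } }
public class Profile { public Address Address { get; set; } }
public class Customer { public Profile Profile { get; set; } }

public static class Shipping
{
    // Too much: the method receives the whole customer, then reaches
    // through Customer -> Profile -> Address to find what it wants.
    public static decimal CostFor(Customer customer)
    {
        return CostFor(customer.Profile.Address.ZipCode);
    }

    // Better: the boundary accepts exactly the information it requires.
    public static decimal CostFor(string zipCode)
    {
        return zipCode.StartsWith("9") ? 12.50m : 8.75m; // placeholder rate
    }
}

The first overload knows far more about Customer’s internals than it should; the second asks for nothing beyond what it actually uses.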

This continues with the SOLID principles.

The Single Responsibility Principle, or SRP, is pretty self-explanatory. A component/class/function/whatever should only do one thing. This essentially promotes composability, where you can assemble a larger thing that does many things from smaller, singular pieces. When SRP is violated, the boundaries between responsibilities are blurred.

The Open/Closed Principle states that components should be open for extension, closed for modification. It’s describing a boundary. Here’s this component that may or may not have multiple responsibilities, but you aren’t permitted to meddle with those responsibilities directly. Instead, a specific interface — a boundary — is provided that allows you to alter the overall behavior by extension, preserving the default behaviors.
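
Here’s a hedged C# sketch of that idea (the exporter and its names are invented for illustration):

using System.Collections.Generic;
using System.IO;

public class Exporter
{
    // The core loop is closed: callers can't meddle with it directly.
    public void Export(IEnumerable<string> records, TextWriter writer)
    {
        foreach (var record in records)
        {
            writer.WriteLine(Format(record));
        }
    }

    // The open part: a specific boundary for altering the overall
    // behavior by extension, while the default behavior is preserved.
    protected virtual string Format(string record)
    {
        return record;
    }
}

public class UpperCaseExporter : Exporter
{
    protected override string Format(string record)
    {
        return record.ToUpperInvariant();
    }
}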

The Liskov Substitution Principle, which is pretty specific to OOP, says that all derived types should be able to act as stand-ins for their ancestors. When this principle is violated, you end up with a derived object that only appears to be like all the others. It’s a boundary-within-a-boundary, wherein the imposter derivatives disrespect the boundary their ancestors have established.
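
The textbook illustration is a Square deriving from Rectangle; a quick sketch in C#:

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area() { return Width * Height; }
}

// Square appears to be a Rectangle, but setting Width silently changes
// Height (and vice versa), so code written against Rectangle misbehaves
// when handed a Square. The imposter disrespects its ancestor's boundary.
public class Square : Rectangle
{
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }

    public override int Height
    {
        get { return base.Height; }
        set { base.Width = value; base.Height = value; }
    }
}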

The Interface Segregation Principle favors smaller, more digestible interfaces instead of larger, heavier ones. In a way, it’s just applying SRP to interfaces. It also has a Law-of-Demeter feel to it, given that smaller interfaces require less overall definition to be satisfied. This principle is reinforcing boundaries between responsibilities.

The Dependency Inversion Principle, which is practically at odds with encapsulation, calls for components to code against abstractions rather than the concrete. It forces a boundary where perhaps there previously was none. Instead of a component taking responsibility for instantiating dependencies, there’s a boundary where, abstractly, that dependency can be supplied, or “injected.”
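
A minimal sketch of what that boundary can look like in C# (IMessageStore and friends are invented names):

using System.IO;

public interface IMessageStore
{
    void Save(string message);
}

// One concrete detail the abstraction shields callers from.
public class FileMessageStore : IMessageStore
{
    public void Save(string message)
    {
        File.AppendAllText("messages.log", message + "\n");
    }
}

public class Notifier
{
    private readonly IMessageStore _store;

    // The constructor is the boundary: any IMessageStore can be
    // supplied here, rather than Notifier newing one up itself.
    public Notifier(IMessageStore store)
    {
        _store = store;
    }

    public void Notify(string message)
    {
        _store.Save("[notice] " + message);
    }
}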

Another boundary-oriented principle that is familiar to many, but is not a member of the SOLID elite, is Don’t Repeat Yourself, or DRY. It is, I think, oft misunderstood, as it is applied literally by squashing code duplication. But, it can and should be applied more generally to concepts. By consolidating a concept into a single place, be it a component or function, you’re establishing a firm boundary around it. When a concept is scattered about, the boundary is once again blurred.

These widely accepted principles are hardly orthogonal; they are bound by boundary.


“What” is Agile?

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That’s the manifesto. An agile organization recognizes and values these basic principles.

The use of the word over is deliberate. It doesn’t say instead of or not. The stuff on the right is important, but the stuff on the left is more important. The recognition of this preference is the quintessential definition of agile.

What’s also intentional is the absence of instruction and ceremony. Nowhere in the manifesto, for example, does it say that in order to produce working software, you must have a daily scrum. Scrum is merely a process, and processes are the stuff on the right. A person doesn’t become a master carpenter by simply clutching a hammer, and a team doesn’t become agile by simply adopting scrum (or any other software development process or tool).

Put differently, the manifesto doesn’t pontificate on the how — there’s nothing imperative about agile. There’s a clear separation of concerns: the manifesto declares the what, and the team implements the how.

When the how satisfies the what, the team is an agile one. The manifesto is not a list of instructions; it’s the acceptance criteria.


A quick blurb about automated/unit testing

I know many of us place value on unit tests. But, like most things in life, benefits bring costs along for the ride. It’s a routine exercise in prudence.

I think unit tests can be valuable. I also think they can be very expensive. And it’s very easy for the costs to surpass the benefits if we’re not careful.

I’m especially fond of Gerard Meszaros’ perspective on automated testing. In his presentation, he makes the argument that tests deserve the same level of craftsmanship and care as our production code, and provides some excellent techniques to distill these tests such that they lower the costs associated with writing and maintaining them.


The Unit Skeptic

I’ve been spending a lot of time lately thinking about unit tests. There’s a divide between developers who champion their use and those who are skeptical. I fall firmly into the former camp, but having spent some time wondering why the other camp is as large as I perceive it to be, I think I might understand why.

A portion of the skeptics have actually tried unit testing, but had a poor experience. The tests they wrote broke often. The tests were also long and complex, and when they broke, they took a long time to repair (or were simply deleted). That experience is a familiar one, and mine was no exception.

As soon as I thought I understood the value proposition, I immediately started to write tests against an existing codebase because I wanted what the tests promised. (First mistake.) I installed NUnit, created some test fixtures, compiled … I was off to the races! From there, the intuitive thing to do was to mimic the path of execution I expected a given class to take. (Second mistake.) Does that sound like an integration or service-level test? You bet, but everything compiled and all of the tests passed, so I had no reason to believe I was doing anything wrong.

I understood that there was to be an element of isolation to these tests, and the classes I was testing had multiple dependencies and cross-cutting statics. TypeMock to the rescue! (Third mistake.) Writing mocks sucked the life out of me. Writing mocks is, I just, ugh. Graphing out the order of each method call and hardcoding each return value on an external module is one of the least exciting things one can do with a computer.

A couple of weeks later, I had a small suite of passing “unit” tests! It felt good. Until they started to break.

As soon as I refactored code, tests broke. I could understand if it was a class that I had written tests for, but the mocks? Changes to the object I was mocking also broke the mocks — go figure! Ugh, and they were such a pain to write. This wasn’t supposed to happen. Grokking the broken tests (and their mocks) and then fixing them was no fun. Tests were supposed to encourage refactoring, not discourage it. Test maintenance was becoming expensive while providing no tangible benefit. As more tests broke, more tests were deleted.

I took a break from unit tests. But, you couldn’t read a blog or listen to a podcast without someone plugging Test-driven development (TDD) and all of its greatness. Despite my skepticism, it was hard to ignore that there was something different about my experience versus TDD: when the tests are written.

With TDD, you write tests while you’re writing your code. (Purists would have you writing the tests first, but I’m still not there yet. Baby steps.) Determined not to miss out on something great, I gave it another go, but this time with a feature that hadn’t yet been written. I established a cadence of writing a small piece of code (e.g., a method) and then writing tests for it. I discovered pretty quickly that my tests were influencing my architecture in positive ways. For example, I started parameterizing context. If my production method needed to know the current time, I’d pass it in as a parameter versus making a hardcoded call to DateTime.Now. Then, my unit test could provide a fixed context. Before I knew it I was using full-blown dependency injection to parameterize context. Not only was this better architecture, but the tests were so much easier to write. (And, no mocks ftw!)
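
For a taste of what that looked like, here’s an illustrative sketch using NUnit (the names are made up):

using System;
using NUnit.Framework;

public static class Greeter
{
    // The current time is a parameter rather than a hardcoded
    // DateTime.Now call, so the caller owns the context.
    public static string Greeting(DateTime now)
    {
        return now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}

[TestFixture]
public class GreeterTests
{
    [Test]
    public void Greeting_BeforeNoon_SaysGoodMorning()
    {
        // The test supplies a fixed context; no mocks required.
        var nineAm = new DateTime(2013, 6, 1, 9, 0, 0);
        Assert.AreEqual("Good morning", Greeter.Greeting(nineAm));
    }
}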

This second pass at unit testing was a far more rewarding experience. In addition to tests taking less time to write and having a positive architectural influence, they were also less brittle. Since this experience, I’ve unit tested nearly every new chunk of code I’ve written, with zero regret.

That’s all well and good for new stuff, but how about that legacy code — the stuff that many newcomers are tempted to write tests for? My first attempt at this was an epic failure. I haven’t given up, but clearly it takes more work than just authoring a few test fixtures. And I won’t attempt it again until after I’ve read Working Effectively with Legacy Code, because I understand that Michael Feathers imparts lots of wisdom in this space. (It’s on my short list; I’m hoping to absorb it in the coming weeks.)

Having gone through the exercise of journaling my past experience with unit testing, I suppose I can understand some of the skepticism among those who’ve tried it. But, when the movement first began, there was a lot less guidance available. It’s also important to understand that a unit testing framework is just like any other tool in that it can be used in any number of ways, including ways in which it was never intended.


Finding the Shortest Path

I’ve known about Dijkstra’s algorithm for a long time, but never took the time to review it and then try to implement it on my own to prove whether I really understood the concept. Until today. I stumbled upon Eoin Bailey’s explanation of Dijkstra’s algorithm, and found it to be quite helpful.

As I was reviewing the algorithm, it struck me that I could probably use a min heap in order to keep track of which node to visit next. Fortunately, a few months ago I wrote a series of C# extension methods to “heapify” a list in exactly the same way that heapq does for Python. It was incomplete (and still is), but enough of the methods were in place that I could make use of it.

I ran into a few bugs, particularly when a longer path was calculated. It turns out the incomplete min heap had a few bugs in it. Once those were ironed out, the algorithm implementation seemed to work flawlessly.
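
For the curious, here’s a compact C# sketch of the general shape (not my exact implementation: this one leans on the PriorityQueue that newer .NET runtimes provide in place of my homegrown min heap, and the adjacency-list representation is a choice I’m making here for brevity):

using System;
using System.Collections.Generic;

public static class ShortestPath
{
    // graph[node] lists (neighbor, weight) pairs; returns the shortest
    // distance from source to every node.
    public static int[] Dijkstra(List<(int To, int Weight)>[] graph, int source)
    {
        var dist = new int[graph.Length];
        Array.Fill(dist, int.MaxValue);
        dist[source] = 0;

        var queue = new PriorityQueue<int, int>(); // min heap keyed on distance
        queue.Enqueue(source, 0);

        while (queue.TryDequeue(out int node, out int d))
        {
            if (d > dist[node]) continue; // stale entry; a shorter path already won

            foreach (var (to, weight) in graph[node])
            {
                if (dist[node] + weight < dist[to])
                {
                    dist[to] = dist[node] + weight;
                    queue.Enqueue(to, dist[to]);
                }
            }
        }

        return dist;
    }
}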

My Dijkstra’s algorithm implementation is contained in my slowly growing DataStructures project on GitHub, if you’re interested in taking a peek.


GUID for Javascript

This morning, I needed to be able to generate a GUID in Javascript. Like any developer, I hit up the interwebs for some help, and landed here. (Gotta love Stack Overflow.)

Anyway, I ended up going with this implementation, submitted by broofa.

function newGuid() {
    // Replace each 'x' with a random hex digit. The literal '4' marks this
    // as a version 4 (random) UUID, and 'y' becomes 8, 9, a, or b (the
    // UUID variant bits).
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
        var r = Math.random() * 16 | 0,
            v = c == 'x' ? r : (r & 0x3 | 0x8);
        return v.toString(16);
    });
}

It’s exactly what I was looking for, all nice and tidy. Happy coding!


Transforming and Validating XML with Python and lxml

XML isn’t nearly as sexy as JSON these days, but it’s still out there in the wild. And it is powerful. For example, it’s pretty awesome that you can assemble an XSL transform to parse XML and turn it into newly formatted XML. It’s also pretty awesome that you can verify XML against a schema to ensure the XML meets all requirements (say, for example, that an ID be unique across all instances) — that the XML is “valid.”

If you are a front-end developer, chances are that you make a series of HTTP requests and receive data — it’s a pretty common thing. For the purposes of this post, we’ll assume that data is XML. But, there’s a problem: the XML is not using the tags you need for your application. So, you apply an XSL transform. Your application makes many assumptions about the format of this massaged data, so you employ a schema or XSD to validate each assumption.

There’s also a pretty good chance that the folks maintaining these services want to tinker. So it would be immensely helpful to be able to quickly test out each URL to be confident that changes made to services won’t negatively affect your application. It would be wise to structure these as actual unit tests, but that is beyond the scope of my focus here.

Commence the tool-making! This seemed like a perfect candidate for Python, so I hopped to it. After some googling, I quickly got the impression that lxml was the perfect library for the job, able to handle both XML transforms and XSD validation. It couldn’t have been easier to work with.

I whipped up a Python script to read URLs from a designated text file, iterate over each one, hit the URL, transform the XML, validate the XML, and write any validation errors to a log file. Pretty straightforward. I can now validate all of my URLs at a moment’s notice and have a full report generated in seconds.

Below is my script:


from __future__ import print_function
import argparse
import lxml.etree as ET
import urllib

parser = argparse.ArgumentParser(description="Transforms and Validates XML")
parser.add_argument("--feeds", help="the text file that holds the urls to be validated", default="feeds.txt")
parser.add_argument("--log", help="the output log file that reports all errors", default="log.txt")
parser.add_argument("-v", "--verbose", help="increase output verbosity", action="store_true")
args = parser.parse_args()

def getDOM(feedUrl):
    # hit the url (fetch the xml), then parse it into a dom object
    return ET.parse(urllib.urlopen(feedUrl))

def getXslt(feedUrl):
    # examine the feedUrl, determine which XSLT to use
    if "myspecialfeeds/rest/abc" in feedUrl: return "xslts/Abc.xslt"
    if "myspecialfeeds/rest/def" in feedUrl: return "xslts/Def.xslt"
    if "myspecialfeeds/rest/ghi" in feedUrl: return "xslts/Ghi.xslt"
    return

def applyXslt(dom, xsltName):
    # parse the xsl transform into an object, then apply it to the incoming dom
    transform = ET.XSLT(ET.parse(xsltName))
    return transform(dom)

def getXsd(feedUrl):
    # examine the feedUrl, determine which XSD to use
    if "myspecialfeeds/rest/abc" in feedUrl: return "xsds/Abc.xsd"
    if "myspecialfeeds/rest/def" in feedUrl: return "xsds/Def.xsd"
    if "myspecialfeeds/rest/ghi" in feedUrl: return "xsds/Ghi.xsd"
    return

def applyXsd(transformedXml, xsdName):
    # parse the xsd into an object, then validate the incoming dom against it
    xmlschema = ET.XMLSchema(ET.parse(xsdName))
    xmlschema.assertValid(transformedXml)
    return

with open(args.log, "a") as logFile:
    with open(args.feeds, "r") as feedsFile:
        print()
        for feedLine in feedsFile:
            # strip out the line terminator
            feedUrl = feedLine.rstrip("\n")
            # if url is commented out using '#', skip line
            if feedUrl.startswith("#"): continue
            # if the line is empty, skip line
            if not feedUrl: continue
            print("Validating: " + feedUrl)
            try:
                # fetch xml from feed via url
                dom = getDOM(feedUrl)
                # only print the xml if verbose logging is enabled
                if args.verbose: print(ET.tostring(dom, pretty_print=True))
                # determine which xsl transform to apply
                xslt = getXslt(feedUrl)
                # apply the xsl transform
                transformed_xml = applyXslt(dom, xslt)
                # only print the transformed xml if verbose logging is enabled
                if args.verbose: print(ET.tostring(transformed_xml, pretty_print=True))
                # determine which xsd to validate transformed xml against
                xsd = getXsd(feedUrl)
                # validate the xml against the xsd
                applyXsd(transformed_xml, xsd)
                print("Feed is valid.")
            except Exception as e:
                print("Feed is INVALID! (See log for details.)")
                print("Invalid feed URL: " + feedUrl, file=logFile)
                print(e, file=logFile)
                if args.verbose:
                    print("Feed URL returned the following XML:", file=logFile)
                    print(ET.tostring(dom, pretty_print=True), file=logFile)
                    print("Feed URL XML transformed to:", file=logFile)
                    print(ET.tostring(transformed_xml, pretty_print=True), file=logFile)
                print("", file=logFile)
        print()


More Python Fun

It’s no secret that, recently, I’ve been teaching myself Python. A couple of weeks ago, I wrote a Python script to convert a CSV file to an XML file, and that whetted my appetite for more.

Earlier today, I discovered Anaconda from Continuum Analytics, which comes with IPython Notebook. Not only is it a really nice tool for learning Python, but you can also plot points! This would have made Calculus way more fun 15 years ago!

At any rate, I started fooling around with some basic list slicing, list comprehension and the functional favorites: filter, map and reduce. IPython Notebook made this incredibly simple. Wanting to tackle something a bit more complicated, I sought out a coding interview problem.

The problem: you’re provided an initial collection of integers, and you are to produce the sum of the highest non-adjacent integers in the collection. It sounds challenging, but when you break it up into smaller pieces, it’s pretty trivial.

I started by building a min heap of the original collection such that I could pop off the largest values in order. A max heap is technically more appropriate, but Python’s heapq module, which turns a list into a heap, only supports a min heap. As for the values themselves, I simply inverted them by multiplying each by -1.

The index of each item is also critical in determining whether adjacent items have already been applied toward the sum. So instead of pushing the raw value onto the heap, I pushed a tuple containing the value and its index.

With the heap fully constructed, the next thing needed was some way of keeping track of which items were used toward the sum. I chose the simple solution of creating a list of boolean values, each initialized to False, such that when an item at the same index is used toward the sum, its value is changed to True.

While popping items off the heap, each item’s neighbors are examined to determine whether it’s a candidate for the sum. If it is, its value is added to a final list, from which a sum can easily be reduced.

Here’s the full script:


from __future__ import print_function
import heapq

# construct initial collection
initial_col = [1, 5, 3, 9, 4]

# construct heap
heap = []

# assemble the heap, store tuples as such: (original_value, original_index)
for index, value in enumerate(initial_col):
    heapq.heappush(heap, (value * -1, index))

# keeps record of used indexes (a list of booleans, each initialized to False)
used_indexes = map(lambda x: False, initial_col)

# will hold all highest, non-adjacent nums
highest_nums = []

# empty the heap, populate list that holds highest, non-adjacent nums
while heap:
    t = heapq.heappop(heap)
    left_idx = t[1] - 1
    right_idx = t[1] + 1
    # skip this item if either neighbor has already been used
    if left_idx > -1 and used_indexes[left_idx]: continue
    if right_idx < len(used_indexes) and used_indexes[right_idx]: continue
    used_indexes[t[1]] = True
    highest_nums.append(t[0] * -1)

# calculate the total (i know about sum(), but reduce() w/ lambda was more fun)
total = reduce(lambda x, y: x + y, highest_nums)
print("total: %d" % total)

Could this problem be solved other ways, either by reducing allocations or increasing speed? Quite possibly, but remember, this was just an exercise to flex my new Python muscles.


Boy Meets Python

Last week I needed a quick solution to convert a CSV file to an XML file, and because C# is my primary language, I was able to throw this together in less than 10 minutes:


string csvPath = "x";
string xmlPath = "y";
using (StreamReader streamReader = new StreamReader(new FileStream(csvPath, FileMode.Open)))
{
    using (StreamWriter streamWriter = new StreamWriter(new FileStream(xmlPath, FileMode.Create)))
    {
        var headers = streamReader.ReadLine().Trim().Split(','); // first line is header information
        string line;
        streamWriter.WriteLine("<root>");
        while ((line = streamReader.ReadLine()) != null)
        {
            streamWriter.Write("<row");
            var rowCols = line.Trim().Split(',');
            for (int i = 0; i < headers.Length; ++i)
            {
                streamWriter.Write(string.Format(" {0}=\"{1}\"", headers[i], rowCols[i]));
            }
            streamWriter.WriteLine("/>");
        }
        streamWriter.WriteLine("</root>");
    }
}


So what does this have to do with Python? Well, this weekend, I had the sudden urge to learn some Python. I wanted to build something that a.) would force me to learn a few things about the language, and b.) had value. The CSV to XML converter was fresh in my mind, and so I thought it would be a great way to begin my Python journey.

To start, I installed Python on Windows. I downloaded the installer from http://www.python.org/getit/, and was writing Python in just a few minutes. Pretty painless.

Writing Python was slightly awkward at first, but I quickly got the hang of things. Having taken the time to learn LINQ and lambda expressions a few years ago certainly helped.

Command line arguments were a breeze using argparse. Within minutes I had a way to specify the CSV input file and the XML output file. It isn’t absolutely necessary, but argparse makes specifying expected parameters easy, and comes prepackaged with --help. Nice.

Next, I stumbled upon csv, which was certainly helpful. But, again, I’m pretty sure I could have survived without it, treating the input file as a standard text file and reading one line at a time.

A long time ago I got into the habit of encapsulating file I/O with using() in C#. It felt awkward acquiring a file handle and having to call close() on it explicitly, but once I discovered Python’s with keyword, I felt right at home.

The rest of the script, which is really the meat of the conversion, required me to learn a little bit about lists and strings. I’m an avid user of string.Format(...) in C# and was happy to see that I could call format(...) in Python.

I began by reading in the first line, which always contains the headers. I wanted to form a format string, something to the effect of <row col0="{0}" col1="{1}"/>, that I could use when processing each subsequent line. I discovered the join() method on the string, and thought that might allow me to dynamically assemble the attributes. Calling join() on the string " " and passing it a generator expression over the headers cleverly assembles the format string — in one line of code! (I felt pretty stupid when I realized that the string in C# also has this feature.)

The last remaining piece was processing each line of the CSV file. This was trivial once I had generated the format string, with one exception. For each line, I thought I could call format() on the format string, pass in the list of values from the line, and write the newly constructed string to the file. The problem was, format() expects individual positional arguments, and I was holding a list of strings. Simply passing the reference to the list, line, was not sufficient. To my surprise, I discovered that I could unpack the list (as such: *line), satisfying format().

And that completed the exercise! I won’t admit how long it took me to write, but let’s just say it took longer than 10 minutes.

Below is the script:


import argparse
import csv

parser = argparse.ArgumentParser(description="Converts a CSV file to an XML file")
parser.add_argument("csv", help="the path to the .CSV file")
parser.add_argument("xml", help="the path to the .XML file")
parser.add_argument("--root", help="root tag name", default="root")
parser.add_argument("--row", help="row tag name", default="row")
parser.add_argument("-v", "--verbose", help="increase output verbosity", action="store_true")
args = parser.parse_args()

if args.verbose:
    print("Beginning CSV to XML conversion …")

with open(args.csv, "r") as csvFile:
    reader = csv.reader(csvFile)
    headers = next(reader)
    # column0="{0}" column1="{1}" column2="{2}"
    attributesFormat = " ".join("{0}=\"{{{1}}}\"".format(val, idx) for idx, val in enumerate(headers))
    # "<row …/>"
    rowFormat = "<{0} {1}/>".format(args.row, attributesFormat)
    with open(args.xml, "w") as xmlFile:
        print("<{0}>".format(args.root), file=xmlFile)  # <root> node
        for line in reader:
            print(rowFormat.format(*line), file=xmlFile)  # <row/> node(s)
        print("</{0}>".format(args.root), file=xmlFile)  # </root> node

if args.verbose:
    print("CSV to XML conversion complete.")


(Having spent the time to set up the row format in Python, I thought I should go back and use the same approach in C#, complete with string.Join(), for a more apples-to-apples comparison.)


string csvPath = "…";
string xmlPath = "…";
using (StreamReader streamReader = new StreamReader(new FileStream(csvPath, FileMode.Open)))
{
    // snag headers
    string[] headers = streamReader.ReadLine().Split(',');
    // col0="{0}" col1="{1}" coln="{n}"
    string attributesFormat = string.Join(" ", headers.Select((colStr, colIdx) => string.Format("{0}=\"{{{1}}}\"", colStr, colIdx)));
    // "<row … />"
    string rowFormat = string.Format("<row {0}/>", attributesFormat);
    using (StreamWriter streamWriter = new StreamWriter(new FileStream(xmlPath, FileMode.Create)))
    {
        streamWriter.WriteLine("<root>");
        string line;
        while ((line = streamReader.ReadLine()) != null)
        {
            streamWriter.WriteLine(string.Format(rowFormat, line.Split(',')));
        }
        streamWriter.WriteLine("</root>");
    }
}


The mere fact that I did all of this on Windows felt slightly sacrilegious, so I decided to go back and conduct the same exercise, this time on Linux — Ubuntu 13.04 to be exact.

Ubuntu ships with Python installed, so technically there were even fewer steps to get started. But, it ships with v2.7.4, and the script I wrote on Windows apparently uses language features that didn’t exist until v3.x. So, I grabbed Python 3.3.2 for Linux from http://www.python.org/getit/, and followed these excellent instructions so that I could have both v2.7.4 and v3.3.2 installed simultaneously. Once installed, the script I wrote on Windows ran equally well on Linux.

It was clear during this exercise that I merely scratched the surface with Python. It appears to have quite an exhaustive API and contains many of the same constructs I’m used to in C#, and I won’t hesitate to use it for all of my future scripting needs.


A date with JSON

I don’t work with JSON every day. In fact, I hadn’t used it at all until the beginning of this year, when I made REST calls to Twitter and retrieved gobs of tweets as JSON.

I’m now working on a project that contains collections of immutable C# objects, and those objects need to make their way to ActionScript. Given that ActionScript is based on ECMAScript, it seems appropriate to serialize these objects as JSON so that ActionScript might easily consume them.

During my Twitter tinkering, I was using an older version of the .NET runtime, and I had no other choice but to rely on third party libraries for JSON support, lest I roll my own. This time, I have the latest and greatest at my fingertips, and I decided to take it for a test drive.

JavaScriptSerializer started off well, for the most part. I could easily serialize any object with a single line of code:

var myObj = new MyObject(...);

var jsonText = new JavaScriptSerializer().Serialize(myObj);

It’s simple, and perfectly innocent. Deserializing, however, proved to be slightly more difficult.

var deserializedObj = new JavaScriptSerializer().Deserialize<MyObject>(jsonText);

This would have worked, but the Deserialize() method depends on the existence of a default constructor, and invokes each property setter individually. That’s fine if you’re working with mutable objects, but for concurrency concerns, I insisted my objects be immutable.

The overload of Deserialize() that takes a Type produced the same results.

var deserializedObj = new JavaScriptSerializer().Deserialize(jsonText, typeof(MyObject));

There was one last method on the JavaScriptSerializer class that had some potential: DeserializeObject().

var objGraph = new JavaScriptSerializer().DeserializeObject(jsonText);

DeserializeObject() returned a dictionary of objects keyed by string. I added a constructor to MyObject specifically to consume it. This worked, but I wasn’t pleased with having to add a separate constructor, and I wondered what I might do if types didn’t match up properly.
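
For illustration, that extra constructor looked something like this (MyObject’s shape here is hypothetical):

using System;
using System.Collections.Generic;

public class MyObject
{
    public string Name { get; private set; }
    public DateTime Created { get; private set; }

    public MyObject(string name, DateTime created)
    {
        Name = name;
        Created = created;
    }

    // Added solely to consume DeserializeObject()'s output: a dictionary
    // of values keyed by property name. Any type mismatch blows up in
    // the casts below.
    public MyObject(IDictionary<string, object> values)
    {
        Name = (string)values["Name"];
        Created = (DateTime)values["Created"];
    }
}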

I continued to capture my assumptions as unit tests, and everything seemed to be working decently … until I hit a DateTime object. I would serialize a DateTime, and it would deserialize as a DateTime four hours ahead. Something was clearly awry.

A quick google search landed me at Scott Hanselman’s post from earlier this year in which he exposed JSON’s poor support of dates, and pointed out that Json.NET does a much, much better job.

I’ve used Json.NET in the past, and so with confidence, I fired up NuGet, downloaded Json.NET, and within the span of about five minutes, was able to produce this:

var jsonText = JsonConvert.SerializeObject(myObj);

var deserializedObj = JsonConvert.DeserializeObject<MyObject>(jsonText);

And, voila! Notice how I’m providing the type? Json.NET is intelligent enough to invoke my constructor with the proper values rather than just relying on a default constructor and invoking each property.
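
Put together, the round trip with an immutable object looks something like this (a sketch; MyObject’s shape is again hypothetical):

using System;
using Newtonsoft.Json;

public class MyObject
{
    public string Name { get; private set; }
    public DateTime Created { get; private set; }

    // The only constructor. Json.NET matches JSON property names to these
    // parameters, so no default constructor or setters are required.
    public MyObject(string name, DateTime created)
    {
        Name = name;
        Created = created;
    }
}

public static class Program
{
    public static void Main()
    {
        var myObj = new MyObject("example", new DateTime(2012, 11, 1, 8, 30, 0));

        var jsonText = JsonConvert.SerializeObject(myObj);
        var deserializedObj = JsonConvert.DeserializeObject<MyObject>(jsonText);

        // The date survives the round trip intact, with no mysterious
        // four-hour shift.
        Console.WriteLine(deserializedObj.Created);
    }
}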

The conclusion? I spent the better part of a day trying to work around the shortcomings of JavaScriptSerializer, and Json.NET solved all of my problems in minutes. Microsoft, take note!