
2019-10-04

Entity Framework 6, SQL, and nullable strings

I ran into an issue that appears to be caused by Microsoft attempting to protect me from myself. Although, truth be told, it wouldn't have been an issue if things were a little better designed.

Imagine, if you will, a SQL Server database with a table of transactions. One of the fields on this table is a CorrelationId. It's a text field, populated by a different system, that ties transactions together (for example, the two sides of a transfer from one customer to another). This field always gets populated on new transactions; an uncorrelated transaction is simply the only one with its CorrelationId. However, this system is not new; it replaced an older system that had no notion of a CorrelationId. So, although the five million or so transactions created by this system have a CorrelationId, there are 12 million "legacy" records whose CorrelationId is NULL.

So, say, for a given transaction, you want to find all correlated transactions. In SQL Server, you might use a simple query like this:

SELECT *
FROM dbo.TransactionTable
WHERE CorrelationId =
(SELECT CorrelationId FROM dbo.TransactionTable WHERE Id = @TransactionId)

And this would work, for the most part (except for legacy records, since SQL will fail to match on the NULL value — but we can ignore this for now). If you took this query into SQL Management Studio and looked at the execution plan, you'd see a nice thin line from the index seek on the CorrelationId, showing that it found and processed a tiny number of matching records, resulting in a very quick response.

However, if you were trying to do this programmatically from a C# application using Entity Framework 6, you might write some code like:

var query = from txn in transactionTable.Entities
where txn.Id == transactionId
join txn2 in transactionTable.Entities on txn.CorrelationId equals txn2.CorrelationId
select txn2;

The problem is that in C# code, null values are equal to one another, while in SQL, NULL is considered "unknown" and doesn't equal itself. (The theory is that you can't know whether one unknown value equals another unknown value, so an equality test between NULL and NULL never evaluates to true.) Instead of leaving it up to the programmer to explicitly code for this condition, Entity Framework "helpfully" writes the join clause that it gives to SQL Server in this manner:

JOIN [dbo].[TransactionTable] AS [Extent2] ON (
([Extent1].[CorrelationId] = [Extent2].[CorrelationId])
OR
(([Extent1].[CorrelationId] IS NULL) AND ([Extent2].[CorrelationId] IS NULL))
)

The extra check for IS NULL on both sides has two unfortunate side effects in this case:

  1. If the transaction is one of the legacy records, it will return a positive match on all 12 million other legacy records with a null CorrelationId.
  2. If the transaction has a CorrelationId, because of the IS NULL, SQL Server will investigate the 12 million null values in the CorrelationId index, resulting in a big fat line from the index seek in the execution plan, and a return time of a couple seconds or more.

The really annoying part is that there doesn't appear to be a way to stop this. Even if you explicitly add a not-equal-to-null check on your target table, Entity Framework still wraps the equality test with checks for IS NULL. The result is almost comical. For instance, adding txn2.CorrelationId != null, either in the join statement or as a where clause, results in this (note the contradictory conditions on the first and last lines):

[Extent2].[CorrelationId] IS NOT NULL
AND (
([Extent1].[CorrelationId] = [Extent2].[CorrelationId])
OR
(([Extent1].[CorrelationId] IS NULL) AND ([Extent2].[CorrelationId] IS NULL))
)
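For contrast, the join this lookup actually needs is a plain equality test, which is trivial to write by hand (using the same table and column names as the earlier query):

```sql
SELECT txn2.*
FROM dbo.TransactionTable AS txn
INNER JOIN dbo.TransactionTable AS txn2
    ON txn2.CorrelationId = txn.CorrelationId -- NULL never equals NULL, so legacy rows simply don't match
WHERE txn.Id = @TransactionId;
```

This is the shape of query that produces the thin index-seek line in the execution plan; it's the IS NULL branches that EF insists on adding which drag the 12 million legacy rows into consideration.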

Even trying to break up the work into two statements didn't help. This code:

var corrId = (from txn in transactionTable.Entities where txn.Id == transactionId select txn.CorrelationId).First();
var txns = from txn in transactionTable.Entities where txn.CorrelationId == corrId select txn;

Resulted in this SQL:

WHERE ([Extent1].[CorrelationId] = @p__linq__0)
OR
(([Extent1].[CorrelationId] IS NULL) AND (@p__linq__0 IS NULL))

Granted, this is a really bad situation to be in to begin with. Indexes on text fields tend to perform poorly, and having such a huge number of null values in the index is likewise unhelpful. A better design would be to rip the text field off into another table, or otherwise convert it into an integer that would be easier to index (something we've had to do in other tables on this very same project, where we've had more control of the data).

I'm willing to bet that Microsoft's translation goes completely unnoticed in over 99% of the cases where it occurs. And, if I had the time to make a design change (with all of the necessary changes to all points that hit this table, some of which I don't have direct control over), it could have been resolved without fighting Entity Framework. Even just populating all of the legacy transactions' CorrelationId with random, unique garbage would've solved the problem (though with a lot of wasted storage space that would've made the infrastructure team cry).
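As a sketch, that garbage-fill would have been a one-time statement along these lines (NEWID() being just one convenient source of unique values):

```sql
-- One-off backfill: give every legacy row a unique, meaningless CorrelationId
UPDATE dbo.TransactionTable
SET CorrelationId = CONVERT(varchar(36), NEWID())
WHERE CorrelationId IS NULL;
```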

In the end, it was solved by creating stored procedures in the database to do correlated-transaction lookups (where the behavior could be controlled and predictable), and having C# code execute those directly (bypassing EF6) to get transaction IDs. Standard Linq queries would then use those IDs, instead of trying to search the CorrelationId.
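In outline, the workaround looks something like this (the procedure name and connection details here are illustrative, not the actual ones):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class CorrelationLookup
{
    // Calls the stored procedure directly through ADO.NET, bypassing EF6 entirely,
    // so the SQL that runs (and its NULL handling) is under our control.
    public static List<long> GetCorrelatedTransactionIds(string connectionString, long transactionId)
    {
        var ids = new List<long>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = connection.CreateCommand())
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandText = "dbo.GetCorrelatedTransactions"; // hypothetical procedure name
            command.Parameters.AddWithValue("@TransactionId", transactionId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    ids.Add(reader.GetInt64(0));
                }
            }
        }
        return ids;
    }
}
```

A normal Linq query can then filter on the returned IDs (e.g., transactionTable.Entities.Where(t => ids.Contains(t.Id))), which EF translates into a simple IN clause.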

This whole exercise was prompted by a script that I had to run to get a bunch of data from a decent number of transactions. It took nearly eleven hours to complete, finishing close to 1am after I started it. Had I had the time to go through this debugging and implement the fix beforehand, it turns out I could've gotten it done in about a third of the time.

2017-06-19

Pass a recordset to C# by way of XML

UPDATE: I have to discourage using this trick. For reasons I do not yet know, it doesn't seem to work with a large dataset. I do not know the exact point at which it fails, I just know that it does. I noticed that a significant number of values that should have been updated with text, actually got updated with nulls. As much as I would love to investigate this and try to see what is wrong and whether it's a failure in C#, SQL, or some combination, unfortunately, it's more important that my work actually get done; so I've had to abandon the XML route entirely.

Original post follows.


It's been a while since I've posted, well, anything. But I learned of a neat trick that I thought I'd post.

I'm currently working on a program that is converting data from two different sources into a single database. A lot of it is just done with carefully crafted SQL statements, but there are a few steps where I have to take data from one source and use some C# code to do some kind of processing before storing it in the target database. Since the data set is on the order of millions of rows, processing these records one at a time can be prohibitively time-consuming. And, since I have limited access to the SQL Server itself, using SQL CLR isn't a great option. (I probably could get the access if I needed to, but it will be an additional step to have to remember and configure when this goes to production, and the fewer moving parts I create for myself, the better.)

One of the tricks I've implemented is to use multi-threading to let the different steps run simultaneously — one thread extracts the records and puts them into a ConcurrentQueue<>, another thread processes that and puts the results into another queue, and a third thread updates the records in the database.
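Stripped of the actual database work, the shape of that pipeline is roughly this (a simplified sketch with stand-in stages):

```csharp
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Pipeline
{
    static readonly ConcurrentQueue<string> extracted = new ConcurrentQueue<string>();
    static readonly ConcurrentQueue<string> processed = new ConcurrentQueue<string>();
    static volatile bool extractDone, processDone;

    static void Main()
    {
        var extract = Task.Run(() =>
        {
            // Stand-in for reading records from the source database.
            for (int i = 0; i < 1000; i++) extracted.Enqueue("record" + i);
            extractDone = true;
        });
        var process = Task.Run(() =>
        {
            string item;
            while (!extractDone || !extracted.IsEmpty)
                if (extracted.TryDequeue(out item)) processed.Enqueue(item.ToUpper()); // stand-in for real processing
                else Thread.Yield();
            processDone = true;
        });
        var update = Task.Run(() =>
        {
            string item;
            while (!processDone || !processed.IsEmpty)
                if (processed.TryDequeue(out item)) { /* stand-in for a batched database update */ }
                else Thread.Yield();
        });
        Task.WaitAll(extract, process, update);
    }
}
```

(BlockingCollection&lt;T&gt; would avoid the busy-wait TryDequeue loops, but ConcurrentQueue&lt;&gt; is what the real code used.)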

I've been trying to come up with ways to do the update in batches. There are ways to create a stored procedure that will take a table parameter, and ways to call that stored procedure by binding the parameter to an equivalent DataSet, but I didn't like the idea of creating a DataSet object just to pass the records in. It just seemed too "heavy" to me. (Though it might've been faster than calling a command object in a loop for records one-by-one.)

Another option was to create a VALUES table and build the command text dynamically. But, since I was working with strings, I didn't like the idea of building dynamic SQL and having to escape quotes or any other special characters that might cause SQL to choke. (Not to mention it's just bad practice, even if my code is unlikely to be used as a SQL injection vector.)

So, I came up with the idea of passing in values as an XML document. By building the XML with Linq-to-XML C# code, all necessary character escapes would be performed automatically. I could pass in as many values at once as I felt comfortable with, and let SQL do the work in a batch instead of one at a time.

To give some context to this code, I am taking email addresses that were encrypted in the source database, and converting them to their decrypted values in the target database. At this point, my queue consists of objects that have two properties: EncryptedEmail and DecryptedEmail. Earlier in my conversion work, I simply copied the encrypted strings over into the Email field of the table, so all this method has to do is update the table, changing the Email field to its decrypted value.

var recordsToUpdate = GetRecordsToUpdateBatch(250); //Retrieves up to 250 records off of the queue at a time
if (recordsToUpdate.Any()) {

    var xdoc = new XDocument(
        new XElement("emails",
            recordsToUpdate.Select(r => new XElement("email",
                new XAttribute("encrypted", r.EncryptedEmail),
                new XAttribute("decrypted", r.DecryptedEmail)))
        )
    );

    using (var connection = new SqlConnection(GlobalSettings.DatabaseConnectionString)) {
        await connection.OpenAsync();
        using (var command = connection.CreateCommand()) {
            command.CommandText = @"
WITH emails AS (
    --Convert the XML document into a table that SQL can use normally
    SELECT Tbl.email.value('@encrypted','varchar(256)') AS Encrypted,
           Tbl.email.value('@decrypted','varchar(256)') AS Decrypted
    FROM @emails.nodes('/emails/email') AS Tbl(email)
)
UPDATE cust
SET cust.Email = emails.Decrypted
FROM dbo.Customer cust
INNER JOIN emails ON cust.Email = emails.Encrypted;";
            command.CommandType = System.Data.CommandType.Text;

            var param = command.CreateParameter();
            param.ParameterName = "@emails";
            param.SqlDbType = System.Data.SqlDbType.Xml;
            param.Value = new SqlXml(xdoc.CreateReader());
            command.Parameters.Add(param);

            await command.ExecuteNonQueryAsync();
        }
    }

    await Task.Run(() => Thread.Sleep(1));
}

2014-09-24

More MVC and JSON.Net - Exceptions

In my last episode with MVC, I was trying to reconcile MVC 5.0 with WebAPI 5.1 and the fact that the former used a broken JSON library, while the latter used the better Newtonsoft JSON library. I implemented a model binder and a value provider factory to get the two projects in sync, and all was right with the world.

As the project has gone on, we've come across instances where we've needed to call MVC controllers and get JSON responses back. We have created WebAPI controllers in some cases, but the way the project is structured, we've needed to post MVC models as JSON via AJAX back to MVC methods where we cannot reference them in the WebAPI project (otherwise we'd have a circular reference).

There are probably a few ways to restructure this to work. Ripping the view models out to a library that can be referenced by both MVC and WebAPI was one idea, for instance. But that would've been a lot of work and caused bigger deployment headaches that I don't want to get into here. (A smaller project might've gotten away with it, though.)

Returning JSON to the client is almost as easy as having your MVC controller return a JsonResult — the MVC Controller class has its own built-in Json() method to do just that. Except, of course, it uses the JSON serializer we never want to see again. In this case, I created my own JsonNetResult class (based on a StackOverflow question, naturally), and my controller just calls this.JsonNetResult(resultObj); to do the magic.
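The class itself is only a few lines; this sketch (paraphrasing the usual StackOverflow version, so details may differ from ours) captures the idea, along with the extension method that enables the this.JsonNetResult(...) call:

```csharp
using System.Web.Mvc;
using Newtonsoft.Json;

// An ActionResult that serializes with JSON.Net instead of JavaScriptSerializer.
public class JsonNetResult : ActionResult
{
    private readonly object _data;

    public JsonNetResult(object data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/json";
        response.Write(JsonConvert.SerializeObject(_data));
    }
}

public static class ControllerExtensions
{
    // Lets a controller write: return this.JsonNetResult(resultObj);
    public static JsonNetResult JsonNetResult(this Controller controller, object data)
    {
        return new JsonNetResult(data);
    }
}
```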

The remaining issue, though, is when it came to exceptions. While WebAPI, on a JSON request, returns an error response formatted in JSON (that jQuery's ajax method parses easily), MVC gives you that nice, big, friendly "yellow screen of death" with the error formatted in HTML (or a generic message when you get to production). Not too useful when you want your JavaScript to report back some detail about what went wrong.

The solution, here, was to use an exception filter. The JsonNetExceptionFilter class checks to see if the incoming request was specified as JSON (using very similar code to the model binder), and if so, it handles the error on its own. We were throwing HTTP error messages using two different types of objects, depending on whether the code was copied from WebAPI or MVC (since they each have their own namespaces for this kind of thing). I check for these types so that I can set the response code to something besides the default 500 (400 Bad Request is used quite a bit), and I set the content to a JsonNetResult object with the data being a message formatted in something similar to the WebAPI format (and thus parsed by the same JavaScript code).
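Condensed heavily (and with the WebAPI-style exception-type mapping omitted), the filter looks something like this sketch:

```csharp
using System.Net;
using System.Web;
using System.Web.Mvc;

public class JsonNetExceptionFilter : HandleErrorAttribute
{
    public override void OnException(ExceptionContext filterContext)
    {
        // Only take over when the request posted JSON; otherwise defer to the base behavior.
        var contentType = filterContext.HttpContext.Request.ContentType ?? "";
        if (!contentType.Contains("application/json"))
        {
            base.OnException(filterContext);
            return;
        }

        // Use the status code from a thrown HttpException (e.g., 400), or default to 500.
        var httpException = filterContext.Exception as HttpException;
        var statusCode = httpException != null
            ? httpException.GetHttpCode()
            : (int)HttpStatusCode.InternalServerError;

        filterContext.HttpContext.Response.StatusCode = statusCode;
        filterContext.Result = new JsonNetResult(new { message = filterContext.Exception.Message });
        filterContext.ExceptionHandled = true;
    }
}
```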

I considered leaving it here, so that we would have to decorate every JSON method with this handler as well as specify that it returned a JsonNetResult (unless there's a way for the filter to determine programmatically whether the current controller method returns a JsonNetResult, but I failed to find one). Ultimately, though, I decided to just override the default error handler site-wide (since it falls back to the base class for non-JSON requests, it shouldn't be an issue). That was done by editing the MVC app's RegisterGlobalFilters method to read:

filters.Add(new JsonNetExceptionFilter());

Now, whenever we want to return an exception back to a JSON request, all we have to do is throw an exception:

throw new System.Web.HttpException((int)System.Net.HttpStatusCode.BadRequest, "Bad request data.");

Processing the return message in jQuery is left as an exercise to the reader (though if you use WebAPI, you probably already know). ;)

2014-07-17

MVC, JSON, and DateTime (oh my)

It's been a while since I've had a programming issue that warrants a blog post, but here's an interesting one.
First, let me set up the situation. I have an MVC project (MVC version 5.0) that communicates to RESTful services using WebAPI (version 5.1). In one particular view, I write out an object in JSON with the intent that the client can make changes to it and POST it back to an MVC action to get an HTML table back. Here's the model:

public class PaymentScheduleRequestModel {
[JsonProperty(PropertyName = "effDt", NullValueHandling = NullValueHandling.Ignore)]
public DateTime EffectiveDate { get; set; }

[JsonProperty(PropertyName = "termId", NullValueHandling = NullValueHandling.Ignore)]
public Nullable<int> ContractTermId { get; set; }

[JsonProperty(PropertyName = "billDay", NullValueHandling = NullValueHandling.Ignore)]
public int BillingDayOfMonth { get; set; }

[JsonProperty(PropertyName = "freq", NullValueHandling = NullValueHandling.Ignore)]
public PaymentScheduleFrequencyId PaymentScheduleFrequencyId { get; set; }

[JsonProperty(PropertyName = "prem", NullValueHandling = NullValueHandling.Ignore)]
public decimal TotalPremium { get; set; }
}

(Note that the JsonProperty decorators exist because the same model is used to communicate to the WebAPI service -- we are using shorter property names to lighten the payload.)

And the MVC action that is set up to process it looks like this:

[HttpPost] public async Task<ActionResult> NewPaymentSchedule(PaymentScheduleRequestModel request) { … }

To put it in JavaScript, the view has this block of code:

window.ScriptModel = @Html.Raw(Json.Encode(new {
PaymentScheduleUrl = Url.Action("NewPaymentSchedule", "Contract"),
PaymentScheduleRequestModel = new PaymentScheduleRequestModel {
ContractTermId = Model.ContractTermId,
EffectiveDate = Model.StartDate ?? DateTime.Now,
TotalPremium = 200m
}
}));

Unfortunately, Json.Encode doesn't work well with dates. This is the output that the browser sees (with line breaks added for legibility):

window.ScriptModel={
"PaymentScheduleUrl":"/Contract/NewPaymentSchedule",
"PaymentScheduleRequestModel":{
"EffectiveDate":"\/Date(1078729200000)\/",
"ContractTermId":null,
"BillingDayOfMonth":0,
"PaymentScheduleFrequencyId":0,
"TotalPremium":200
}
};

Notice two things about System.Web.Helpers.Json.Encode's output:

  1. The property names are the .Net property names. The same code could not be used to post what should be the same model to MVC or to WebAPI.
  2. The EffectiveDate field has been converted to a Date function, enclosed in a string. Even if the Date function would reveal the correct value, the fact that it's in a string means JavaScript will not see it as a Date.

If I don't do anything with this object and just post it back to the MVC app, the resulting object does not have a valid date — all other values carry over, but the EffectiveDate property is 01/01/0001. So not only does it look odd, not only is it inconvenient in that JavaScript can't use it as-is (without picking apart the string), but it doesn't even work for round-tripping data to and from the client.

In doing some research on this topic, I came across Scott Hanselman's blog post describing the problem and stating that the release of WebAPI wouldn't have the issue, since it would use JSON.Net (a.k.a. Newtonsoft.Json). Since I'm using the same model in a WebAPI call further downstream, I can verify that it does work as intended. It uses the JsonProperty decorators to rename the properties, and it serializes and deserializes like magic.

To solve this problem in MVC, you have to alter how it deals with JSON on the way out and on the way in.

On the way out is easy in my case, since I am manually spitting out JSON into the HTML. I just exchanged Json.Encode with Newtonsoft's serializer:

window.ScriptModel = @Html.Raw(Newtonsoft.Json.JsonConvert.SerializeObject(new {
PaymentScheduleUrl = Url.Action("NewPaymentSchedule", "Contract"),
PaymentScheduleRequestModel = new PaymentScheduleRequestModel {
ContractTermId = Model.ContractTermId,
EffectiveDate = Model.StartDate ?? DateTime.Now,
TotalPremium = 200m
}
}));

And the browser sees:

window.ScriptModel={
"PaymentScheduleUrl":"/Contract/NewPaymentSchedule",
"PaymentScheduleRequestModel":{
"effDt":"2004-03-08T00:00:00",
"billDay":0,
"freq":0,
"prem":200
}
};

This is better. Of course, the controller action doesn't understand this. It's still looking for the .Net property names, and, since they don't exist on the incoming object, all values come back empty. (In this simple example, not only is EffectiveDate 01/01/0001, but TotalPremium is 0.0.)

The trick here is to override MVC's default model binder so that it, too, uses the Newtonsoft.Json library. For consistency, MVC should also use a value provider factory based on JSON.Net.

Fortunately, people smarter than I figured out these two steps. I found a value provider factory on this blog: http://www.dalsoft.co.uk/blog/index.php/2012/01/10/asp-net-mvc-3-improved-jsonvalueproviderfactory-using-json-net/
and the important piece, the model binder, that will translate the JSON property names to their real .Net names, is detailed here: http://stackoverflow.com/questions/4164114/posting-json-data-to-asp-net-mvc

My implementation looks like this:
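Roughly, following those two links, the two classes come out like this (a sketch; the originals have more error handling):

```csharp
using System.Dynamic;
using System.Globalization;
using System.IO;
using System.Web.Mvc;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

// Feeds JSON request bodies through JSON.Net so MVC's value providers see them.
public class JsonDotNetValueProviderFactory : ValueProviderFactory
{
    public override IValueProvider GetValueProvider(ControllerContext controllerContext)
    {
        var request = controllerContext.HttpContext.Request;
        if (request.ContentType == null || !request.ContentType.Contains("application/json"))
            return null;

        request.InputStream.Position = 0;
        var body = new StreamReader(request.InputStream).ReadToEnd();
        var deserialized = JsonConvert.DeserializeObject<ExpandoObject>(body, new ExpandoObjectConverter());
        return new DictionaryValueProvider<object>(deserialized, CultureInfo.CurrentCulture);
    }
}

// Binds JSON request bodies with JSON.Net, so [JsonProperty] names like "effDt"
// map back onto the .Net property names.
public class JsonDotNetDefaultModelBinder : DefaultModelBinder
{
    public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        var request = controllerContext.HttpContext.Request;
        if (request.ContentType != null && request.ContentType.Contains("application/json"))
        {
            request.InputStream.Position = 0;
            var body = new StreamReader(request.InputStream).ReadToEnd();
            return JsonConvert.DeserializeObject(body, bindingContext.ModelType);
        }
        return base.BindModel(controllerContext, bindingContext);
    }
}
```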

And this code gets called from Application_Start (I actually added it to my WebApiConfig class, which, for some reason, exists in my MVC app even though it's obviously not the same as WebAPI, since other configuration-type things were being done there):

ValueProviderFactories.Factories.Remove(ValueProviderFactories.Factories.OfType<JsonValueProviderFactory>().FirstOrDefault());
ValueProviderFactories.Factories.Add(new JsonDotNetValueProviderFactory());

ModelBinders.Binders.DefaultBinder = new JsonDotNetDefaultModelBinder();

All this because, although Microsoft updated one of their web interfaces (WebAPI) to use the JSON.Net library that works, another one of their interfaces (MVC) uses their own, broken, JSON serialization library.

2012-11-28

MVC: Any object you want, but not a string

I often refer to MVC — the Model-View-Controller pattern for building websites — as "Magic, Voodoo, & Conjuring", because a lot of things just happen by what seems like magic. Moreover, when something goes wrong, it's very difficult to find exactly where it went wrong, because the process is happening without writing any code.

I had one such example of something going wrong today. It was only after I got another pair of eyes to confirm I wasn't crazy that I found a StackOverflow question that addressed my question.

We had a JavaScript method that took a method name, a key, and an object, and posted that to an AJAX handler on the server, more or less like so:

$.ajax({
    url: "/Home/" + methodName + "?key=" + keyId,
    contentType: 'application/json',
    data: $.toJSON(dataObject),
    success: function (result) { doSomething(); },
    type: "POST"
});

This JavaScript code block was part of a plug-in that we wrote to apply to many different forms on the site, that shared common functionality. Depending on the data being processed, the dataObject could be anything from a string to a full-blown object. The controller methods all had similar signatures:

public ActionResult ProcessStringData(string key, string newData);

public ActionResult ProcessDataEntity(string key, DataEntity newData);

When the AJAX call was made, the querystring parameter key was correctly mapped to the method parameter key, and the JSON object in the request payload was magically deserialized and mapped to the newData parameter.

The problem was (as you may have guessed from the StackOverflow question), while this worked great for the object, it was completely ineffective for the string — despite the fact that the data was in the request input stream, the value would always be "null".

This is why it was such a head-scratcher. If the data was a DataEntity object and we were calling the ProcessDataEntity method, the AJAX payload would simply be:

{ prop1: "value", prop2: "value2" }

Note that nowhere in that text is the name newData, and yet the MVC framework somehow managed to interpret it as a DataEntity object and pass it as the newData parameter.

Calling the ProcessStringData method with a string as data, however, resulted in this payload:

"data value"

So why wasn't it magically treating this as a string and assigning it to the correct parameter?

Maybe the JSON deserialization magic was throwing an unseen error. Using Fiddler, I tried submitting a payload of { "data value" }. This resulted in a very visible error response from the server about an invalid JSON object. Since "data value" did not return the error, the deserializer must've been relatively fine with it.

Curiously, if I changed the payload to:

{ newData: "data value" }

it did, in fact, work. However, I didn't want to have to wrap a simple string in a named-parameter object to get it to work — it would be inconsistent with the object-processing methods that did not require this massaging. (Consistency makes for much more maintainable code. Try bringing a new developer up to speed and explaining that data is in a certain format, except in a list of "special cases" that you're lucky if you remember to mention, and you'll discover this truth quickly.)

Looking through StackOverflow, it seems that it is possible to remove the content type to process strings. Again, this would have required determining if I was calling a string-based or object-based method and setting the content type (because I actually did want the application/json type for objects). It just didn't "feel" right.

I ended up wrapping the payload in an array. For whatever reason, MVC is unable to take a bare string payload and map it to a string parameter, but it is perfectly fine mapping a payload of the form [ "value" ] to a parameter of type List<string>.

It's just one of those things that requires some coding around, but the hard part is trying to understand why. And because it's all handled by magic, voodoo, and conjuring (a.k.a. some method buried deep inside the framework), it's not in your working code, and therefore very difficult to track down.

2012-05-24

I code with a catapult

I decided to try something this year. For my entire programming career, I have used a fixed-width font while coding. Of course, when I first started coding, fixed-width fonts were all a computer could do; and even these days, command line interfaces still predominantly depend on a fixed-width font for display (especially if they try to draw boxes or other pseudo-graphical elements). Even as I type this blog post using Blogger's HTML editor, the edit window uses fixed-width characters; and if I wrap some text in <code> tags, chances are your browser will also render that text with a fixed-width font (even if Blogger didn't apply a custom style to it). But for a long time now, computers have been able to use proportional-width fonts on screen, and many development tools support picking your own font. Yet, whenever I got the inspiration to try a different typeface in my IDE, I always picked from the list of fixed-width fonts. Visual Studio seems to encourage this by making fixed-width fonts bold in the font dialog.

(Although, admittedly, it is convenient to know which fonts are fixed vs. proportional at a glance, and bolding one or the other is a simple way to show the difference.)

But I had read a handful of blog posts from developers claiming how it is easier to read code when they view it in a proportional-width font. The discussion often includes pros and cons of viewing code in a "literary" font vs. a "computer" font, and often includes a discussion of tabs vs. spaces (the reverse is also true: discussions about tabs vs. spaces often include someone commenting on the benefits of one vs. the other when using a proportional-width font instead of a fixed-width font). I decided that I wanted to try something new, and maybe I'd give this developing-in-a-proportional-width-font thing a try.

The major criterion I've found for picking a font is that it must have very distinct characters. Fonts where a number 1 looks like a capital I looks like a lower case l, where a number 0 looks like a capital O looks like a pair of (), and where (parentheses), [brackets], and {braces} are too similar (or just too "weird"), tend to cause more headaches and annoyances than their otherwise typographical good looks are worth. Also, the font should, obviously, be supported (Visual Studio only supports TrueType fonts in the IDE), and it should be legible (and similar characters distinguishable) at smallish sizes (I tend to view my code at 8 or 9 point). It's also helpful if it's one of the "standard" fonts that is available on nearly every machine (i.e., you don't have to try and find a legal, free download every time you want to use it on a new machine). One proportional-width font that fits these criteria fairly well (no font is perfect on all counts) is Trebuchet MS.

To make a long story short, I'm lovin' it. I've found it much easier on the eyes to read through code. It looks neater, cleaner, more professional, and more tidy (even if the actual code is a mess). And while I realize this "analysis" is all based on subjective, personal preferences, in my opinion a font choice when doing development is exactly that. The proportional font also helps with viewing wider lines on the screen. Now, while in an ideal world you should never let your code lines get so wide that you need to change fonts to read them all, there are times where it is necessary to have a wide line (especially in HTML, where browsers — and I'm not just talking about IE here — behave oddly when you try to insert a line break).

Since commentary on font style is almost never complete without tabs vs. spaces, I'll add my 2¢ here. Personally, I prefer tabs, because they are more compact (one character vs. [number of spaces]), more customizable (most IDEs worth their salt let you specify the size of a "tab stop"), less prone to error (you can type three spaces instead of four, but you can't type half a tab), and easier to navigate (to back up an indent level, you only have to press the left arrow or delete key once, instead of four, six, or eight times). The arguments I've heard for spaces over tabs, I either don't agree with or I don't (personally) think matter: they are uniform across all IDEs (agreed, it is jarring to open a code file in another tool [e.g., a diff tool] and see it display tabs at a very different width), they display the code as written (arrogance; why is the way John Q. Developer aligned the code the "right" or "only" way to view that code?), and they allow for more precise aligning of things like variable declarations (irrelevant; you can tab to the proper indent and use spaces for fine alignment, if necessary). But these are just my opinions, as everyone has their own (and, as a quick scan of the internet shows, some hold to their own opinion with something approaching fanaticism or zealotry that makes it not worth arguing).

I've found that using a proportional-width font seems to work best with tabs, since it helps you keep the code consistently indented no matter which font you use. (Plus, if you prefer larger indents, you end up having to use even more spaces to get the same depth as with a fixed-width font.) However, there is one potential drawback to using a proportional-width font that neither tabs nor spaces can solve, and that's the aligning of variable declarations (or other similarly-formatted code blocks). See this picture for an example:

In a proportional-width font, you can't use spaces to align the variable names with the first line. Not only would it vary depending on your choice of font, but there's no guarantee that the sum of the widths of the letters in the variable type is evenly divisible by the width of a space. (You can use tabs to align them with a tab stop, but again, that will depend on the size of your tab stop and whether the variable type name fits within one or two [or more] tab stops.) If that matters enough to you, this may be a deal-breaker; but it's one of the things I've found, in practice, doesn't matter that much when you can just indent the dependent lines like so:

If you haven't tried coding with a proportional-width font, I highly encourage you to give it a try. At best, you might like it. At worst, you can always change it back.

2011-08-16

Firefox 6 - Major version, major pain

I was testing out my web code in Firefox today, and I got a message saying that Firefox 6 had been released. Wanting to make sure I was keeping up to date with our users, I installed the update. After restarting the browser when prompted, I noticed that it looked exactly the same. So I went online to check what's new.

Apparently, not much. Oh, there's a small list, but nothing really visible, and it's not really any faster than before. It seems that Mozilla's accelerated release schedule is nothing more than releasing what would normally be a "point release" as a major version instead.

But to what end? Chrome is on version 13; IE's current release is version 9 (with 10 available for preview); Opera is on version 11. Could this be nothing more than a way to "catch up", so "Firefox 5" doesn't sound like it's way behind the other browsers? It was speculated that Microsoft named their second console the "Xbox 360" because "Xbox 2" would sound like it was behind "Playstation 3"; this could be another example of toying with customer mindshare.

Or could it be a way to flush some of the old versions out of general support? Many companies I worked at had a policy such as, "Support the current major browser versions minus two", which today would mean "IE 7 and above, Firefox 4 and above, Chrome 11 and above, etc." — releasing a few major versions quickly would push older browsers that don't support up-and-coming standards like HTML5 out of the support window rather quickly.

Whatever the reason, there is one rather large detrimental effect. Extension authors have to certify that their code is compatible with major releases. Because Firefox just got a new major version number, all of my extensions were marked as "incompatible" and disabled, and that way they will remain until their authors update their xpi packages to mark them as compatible with this new major version. Essential debugging tools like FireQuery, HttpFox, even the Java console are among those that are off-limits until they are updated. Even the extension for the corporate virus scanner is disabled, as is Skype's "click to call" extension (which wasn't marked compatible with Firefox 5 either; not that I use it myself, but I have to be able to confirm that, for customers who do use it, its phone number reformatting doesn't make the page unusable).

Firefox, you used to be cool. I didn't mind inviting you over and letting you crash on my couch. But now you've stolen the food from my cupboards, gotten fat and lazy, started leaving your dirty clothes lying all over the house; and your wild parties broke my antique table lamp without so much as a "sorry" from you. I grow tired of having to clean up after your mess. IE used to be where you are now, but at least he's been working on cleaning up his act.

2011-08-02

Firefox, jQuery, and event binding, revisited

While I had decided for myself that I didn't want to support Firefox anymore, the company that signs my paychecks respectfully disagreed. Because my project is an internal admin tool, we were able to tell our user base not to use Firefox for the time being and assign the bug a relatively low priority — but, it was something that should be revisited and fixed. Having completed my tasks well ahead of schedule, the time was at hand.

The bug had to do with Firefox failing to fire off a jQuery click event handler. I had long since installed the Firebug extension, but all it could tell me for certain was that the event handler code was not being called. I was fortunate, in my searching, to find another extension, FireQuery, which extends Firebug by adding jQuery information to the debug panels. Installing that and viewing the HTML, I could see that the jQuery click event handler simply wasn't there on the switches that weren't working.

Using the development version of jQuery, I stepped through the code that attaches the event handler, and I could find no difference in code execution between the switches that worked and the ones that didn't (unsurprising, since I attached to all of them at once).

The solution, surprisingly, came about when I started mucking around with elements and styles, substituting divs for lis in a desperate attempt to find the cause. To make a long story short, it was the fact that, later in the code, I called a jQuery plugin called "text-overflow" that emulated the text-overflow: ellipsis stylesheet directive that every browser but Firefox supports.

The problem with the code is that, in order to emulate the feature, the plugin creates a cloned copy of the node to "ellipsify" and progressively removes characters until the width of the cloned node fits in the desired width of the original. Then, it sets the contents to be the new text, and destroys the copy. Unfortunately, there are a couple side effects:

  1. If no truncation happens, the contained elements lose their event bindings, since they end up not being the original elements, but copies.
  2. If truncation does happen, when the contents are replaced with the new text, any other elements contained therein (i.e. hidden form fields holding data for a form post) are wiped out.

The first problem might have been overcome by using jQuery's own .clone(true), which copies nodes along with any events bound to them. (This was added after the plugin was written, so I don't fault the author for the oversight.) To be safe, though, I changed the class on the switches' containers and excluded them from my .ellipsis() call. That successfully kept the slider click event handlers from disappearing.

The second problem was the source of another bug, where a jQuery function attempting to look up values from hidden form fields was failing to find the data on elements that were truncated by the text-overflow plugin. I moved the hidden form fields outside of the elements targeted by the function call, and that, too, was magically fixed.
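For reference, the clone-and-truncate loop at the heart of the plugin can be sketched in plain JavaScript. The real plugin measures rendered pixel widths on a cloned DOM node; here the width is simulated by a caller-supplied measure function so the logic can run anywhere, and the names are illustrative, not the plugin's actual API:

```javascript
// Progressively remove characters until text (plus an ellipsis) fits
// within maxWidth, as reported by the measure function.
function ellipsify(text, maxWidth, measure) {
  if (measure(text) <= maxWidth) return text; // already fits; nothing to do
  let t = text;
  while (t.length > 0 && measure(t + "…") > maxWidth) {
    t = t.slice(0, -1); // drop one character and re-measure
  }
  return t + "…";
}

// Usage: simulate a 10-unit-wide container at 1 unit per character.
const clipped = ellipsify("Colorado Springs", 10, (s) => s.length);
console.log(clipped); // "Colorado …"
```

In the real plugin, replacing the element's contents with this truncated string is exactly what wipes out any child elements — hence the two side effects above.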

Internet Explorer has a bad (and well-deserved) reputation for needing special development time to do the same thing that other browsers do (although, in my experience, a lot of this would be unnecessary if it weren't for a requirement, usually from an upper-level executive or, worse, marketing, mandating pixel-perfect replication across all browsers), but I've found lately that the pendulum has swung far the other way to deal with the quirks in Firefox.

2011-06-24

You have failed me for the last time, Firefox

In my last post, I mentioned that Firefox was causing issues with my custom on/off control. I solved that problem, but now I have a new one.

When the on/off control renders, it renders with an onclick attribute that calls the function that moves the slider and sets the value of the control in a hidden field. If I want to add any more actions to the control (a common one is to change its color when the value is changed), I use the jQuery .click() function to attach a JavaScript function to its click event.

On a page with several on/off controls, I have bound click events to change their color and enable a save button on change. It's worked every other time I've done this, and it works in this case as well — except for Firefox. For some reason, the last group of controls (which happen to be rendered in an ASP Repeater) do not fire the jQuery click event. What's worse, even though clicking on them does in fact slide the button from one value to the other, it apparently is not setting the hidden form field to change the value — and all that code is being run from JavaScript called in the onclick attribute.

I have found numerous references to jQuery click events not being fired in Firefox, but none of the solutions will help me here. I can't move my code into the onclick event, since I need to register the click handlers from different places; and nothing is apparently wrong with the code as it works as-is not only in other browsers, but in the same browser elsewhere on the same page.

I have spent the better part of a day trying to find where this click event is being "swallowed", and why this code that works perfectly on IE, Opera, and Chrome is not functioning in Firefox. I'm done. It's an internal tool, and we have the luxury to mandate our users' browser choice to some degree.

Firefox, I don't know what your problem is, but I am tired of wasting days of development time trying to find solutions for problems that only exist in Firefox. You are the weakest link. Goodbye.

2011-06-17

Firefox, jQuery, and RegisterStartupScript

I have created a control for the web project I'm working on at work. It's an On/Off slider control, very similar in appearance to the one used by iOS, that uses jQuery to animate the sliding switch on click. Over time, I've added more features and support to it as requirements have come in — its message text is configurable, it can raise its own postback events, and it even works in AJAX postbacks. Someday, I might have to publish the code (maybe when I get rid of its dependence on images and make it size-adjustable as well).

When the control is first rendered, the slider switch is centered halfway between "On" and "Off", and I call Page.ClientScript.RegisterClientScriptBlock that calls that control's "set" function to slide the switch to the correct position. (The control's client click event sets the control's value property and calls this same function to slide the switch back and forth on demand.) It works great on IE, Opera, Chrome… but not Firefox. It would render the first few correctly, but somewhere halfway down the page (depending on how many switches were involved — my particular example page had 10), the slider switches would remain in their centered, "unset" state.

This is getting to be a theme.

I found that, if I added an alert() call before every SetSlider call, then every switch would be set, except the last one. If I reversed the calls so the alert() came second, all the switches would be set.

I was able to solve the problem by wrapping each SetSlider call in a jQuery ready function (i.e., $(function(){ … });). Because the control is self-contained and has no knowledge of other sliders on the page, that's a lot of ready functions; but jQuery seems to handle it without incident. The only other thing I had to deal with was ensuring that any startup functions of mine that alter the switches' states run after those SetSlider calls — in my case, by moving the code to the page's PreRenderComplete event (because, according to MSDN, the controls' PreRender events, where I registered the SetSlider methods, get called after the page's).
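The reason many independent ready functions are harmless is that jQuery queues every $(function(){ … }) callback and runs them in registration order once the DOM is ready. A minimal simulation of that queueing, runnable outside a browser (setSlider here is a dummy stand-in, not the control's real function):

```javascript
// Simulated ready queue: each control registers independently,
// as the rendered controls do with $(function(){ SetSlider(...) }).
const readyQueue = [];
function onReady(fn) { readyQueue.push(fn); }

const states = [];
function setSlider(id, value) { states.push(`${id}=${value}`); } // dummy stand-in

// Three self-contained controls, each unaware of the others:
onReady(() => setSlider("switch1", "On"));
onReady(() => setSlider("switch2", "Off"));
onReady(() => setSlider("switch3", "On"));

// "DOM ready" fires once; all queued callbacks run, in order.
readyQueue.forEach((fn) => fn());
console.log(states.join(",")); // switch1=On,switch2=Off,switch3=On
```

Because the callbacks run in registration order, any later startup code that touches the switches just needs to be registered (or executed) after these, which is what moving it to PreRenderComplete accomplishes.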

It works, but it still took a couple hours out of my day to find it and then fix it. Thanks again, Firefox.

2011-04-28

4096 bytes is more than enough for everybody

Ever try to make an AJAX call from JavaScript, only to find out that your data is truncated at around the 4k mark? If so, you must be using Firefox 3. Although there is a workaround described here, it means you have to write extra, special code to check for a custom property to really get all the data.

You can test this problem using this QuirksMode page. The page notes other limits on Opera and the Macintosh version of IE, but on the current (v11) version of Opera, I was unable to duplicate it. I also don't have a Mac, so I couldn't verify it there, either. Only Firefox fails to copy the whole text block.

Apparently, this has been fixed in Firefox 4, but that's small consolation when you're tasked with supporting what people are actually using.

2011-04-21

Let me url that for you

Today's programming annoyance is brought to you by the .Net Framework.

If you add a style to an HtmlControl in code — for example, the list-style-image to a list item element — you might use something like the following:

control.Style.Add("list-style-image","none");

You might expect that the style element would contain "list-style-image: none;".

Apparently, you'd be wrong. The tag generated looks like this:

<li style="list-style-image: url(none);">

There is a workaround, but it depends on you being able to specify all list-style properties at once. Using the shortcut, this code:

control.Style.Add("list-style","none none");

will generate the expected HTML, unmangled (and set both the list-style-type and list-style-image properties).

Having list-style-image: url(none) may not make your page look broken, but it will trigger a browser request for a file named "none" and fill your server logs with excessive 404s.

If you had an actual image and set it with control.Style.Add("list-style-image","url(imageName.jpg)"); the style would not suddenly get "double-wrapped" with the url() function. By wrapping your value with url() "just in case" you forgot it, Microsoft tries to protect you from yourself, and ultimately causes more headaches by preventing you from setting the style to a perfectly valid value.
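The framework's behavior, as observed from the rendered output, amounts to something like the following sketch. (This is inferred, not the actual .NET source; the addStyle function and its logic are illustrative only.)

```javascript
// Inferred sketch of the framework's "helpful" wrapping: values for
// *-image style keys get wrapped in url(...) unless they already are.
function addStyle(styles, key, value) {
  if (key.endsWith("-image") && !/^url\(/.test(value)) {
    value = `url(${value})`; // wraps even the keyword "none"
  }
  styles[key] = value;
}

const styles = {};
addStyle(styles, "list-style-image", "none");
console.log(styles["list-style-image"]); // url(none) — the mangled value
addStyle(styles, "list-style", "none none");
console.log(styles["list-style"]); // none none — the shorthand passes through
addStyle(styles, "list-style-image", "url(imageName.jpg)");
console.log(styles["list-style-image"]); // url(imageName.jpg) — no double wrap
```

The shorthand escapes mangling only because the key doesn't end in "-image", which is why the list-style workaround produces clean output.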

2011-01-18

Holding back features on the web

If I were to say there's a CSS 3 feature that all major browsers support except one, which browser would you guess is lacking?

The answer for today is Mozilla Firefox.

It has come up several times on my current project where we've needed to take a variable amount of information and stuff it into a limited space, where aesthetics demand we truncate the data instead of allowing an overflow or a word wrap. The typical way to do this is with an ellipsis, but at what point one should truncate the message is usually the result of guesswork. Checking to see if a string is over, say, 35 characters and cutting it off if it is may work in most cases; but because in a proportional font, the same number of characters can be different sizes depending on which actual characters are used, any fixed number will result in some data elements appearing too short, and a few appearing too long and wrapping or overflowing anyway.

Enter the text-overflow style. In a fixed div or span, setting style="text-overflow: ellipsis;" will cause the browser to truncate the contents with an ellipsis if, when, and where it is needed.

Except for Firefox. Since CSS 3 is still technically in "draft", the coders behind Firefox have decided not to implement text-overflow, despite it being on the bug list since 2005. Mozilla's own developer forum shows that Firefox is the only browser to not implement this to date.

Oddly enough, this is the second time Firefox has failed me recently (the first being a misbehaving feature that plays havoc with AJAX queries).

I found two solutions on the web. One is to use something called XUL binding. The procedure (described here) involves creating an XML document that describes the requested behavior, and then using a Mozilla-specific CSS directive to bind it to the element. Unfortunately, not only does this require another document, but it may conflict with the text-overflow style such that only one or the other will work, but not both. Also, following the comments in the bug, XUL appears to be going away in Firefox 4; with text-overflow still not implemented, this workaround will no longer work.

The second solution uses the JavaScript library jQuery. The function (which I found at Devon Govett's blog), when applied to a web element, takes the text, recreates it in a clone of the element, starts truncating text as necessary until it finds text that fits in the element, and replaces the text in that element. It's not terribly efficient as it iteratively tests the text on each element, and if the element can change size you either have to update it manually or tell the script to constantly check the element and recompute; but it does do the job that Mozilla won't. Fortunately, we're already using jQuery, so adding an extension was a trivial task.

I don't know if it's some higher ground they're trying to take by not implementing "draft" features, but the fact remains, as an end user of browsers, to me, they appear to be stubbornly behind the curve.

2011-01-14

ASP.Net, Dynamic Controls, and ViewState, revisited

At my current job, we are encouraged to share tips and ideas with other developers. I thought it could be useful to demonstrate the problem of dynamic controls and ViewState and my solution (posted three years ago here), since it not only is a problem that could come up in our web development, but it provides a useful opportunity to review the page life cycle.

So I grabbed my sample code and opened it in Visual Studio 2010. The good news is, it still works as advertised. However, I wanted to demonstrate the problem along with the solution; so I removed all my "extra" code. I was rather startled to find that the old problem didn't manifest itself. When I typed in data to one control and clicked a button to add another, the first control retained all its data.

It seems that the .Net Framework got some improvements over the years. The first improvement is that it seems ASP.Net is far more consistent in naming controls that are added to the page at run-time. (Part of the original problem was, when a control was loaded on page load vs. later in an event handler, the dynamically-assigned ID would be different.) The second is, if a control is loaded later in the life cycle, it does actually go back to the ViewState and re-load any applicable data. (It used to be very unreliable in this regard.)

I did find that things were not all roses. If you add a bunch of controls and start removing controls from the middle of the list, control data can get lost. Also, if you delete controls in the middle of your list and re-add controls, the new controls may get added in the middle of the list instead of the end.

The solution is much easier than it used to be:

  • Create a member variable to hold the list of IDs (or whatever data is required to recreate the control and its ID) — in this example, I'm using private List<string> _childControlIds = new List<string>(); to store just the IDs, since the control type and location are always the same. (Initializing it at the declaration means the postback path below can call AddRange without a null check.)
  • Create a Page Load event handler that looks like this:
    private void Page_Load(object sender, EventArgs e) {
     if (!IsPostBack) {
      _childControlIds = new List<string>();
      addAField(null).InitializeNewControl(); //Optional - create a new control, and initialize its data
     } else {
      var ids = ViewState["ControlCount"] as string[];
      if (ids != null) {
       _childControlIds.AddRange(ids);
      }
      foreach (string controlId in _childControlIds) {
       addAField(controlId); //Create an existing control with its already-established ID
      }
     }
    }
  • The addAField method looks like this:
    private CustomChildControl addAField(string fieldId) {
     CustomChildControl cc = (CustomChildControl)LoadControl("CustomChildControl.ascx");
     if (String.IsNullOrEmpty(fieldId)) {
      cc.ID = String.Format("CUST{0}", DateTime.Now.Ticks); //new control; create a unique ID
      _childControlIds.Add(cc.ID);
     } else {
      cc.ID = fieldId; //existing control; reuse ID
     } 
     this.CustomControlsPlaceHolder.Controls.Add(cc);
     cc.DeleteControlClick += new EventHandler(DeleteCustomControl);
     return cc;
    }
    Notes:
    1. It no longer appears to be necessary to add the control before setting its ID — the ViewState manager seems to pick it up just fine either way.
    2. The custom control in my example has its own delete control and fires an event that this page subscribes to. Your implementation may vary.
  • The DeleteCustomControl method looks like this:
    private void DeleteCustomControl(object sender, EventArgs e) {
     CustomChildControl cc = sender as CustomChildControl;
     if (cc != null) {
      _childControlIds.Remove(cc.ID);
      this.CustomControlsPlaceHolder.Controls.Remove(cc);
     }
    }
  • The method to add a control (in my case, a button on the page) is simply:
    private void AddButton_Click(object sender, EventArgs e) {
     addAField(null).InitializeNewControl(); //Create a new control, and initialize its data
    }
  • And finally, a Page PreRenderComplete event handler (because it's late enough in the page lifecycle; PreRender itself may be sufficient for your needs) that sticks the control ID list in ViewState:
    private void Page_PreRenderComplete(object sender, EventArgs e) {
     ViewState["ControlCount"] = _childControlIds.ToArray();
    }

And that's it. Surprisingly simple.

I don't know at what point this changed (or if I even over-architected the original solution — a distinct possibility). This could be an improvement in .Net 3.5, or it could be something "fixed" in a service pack along the way. The only thing I can say for certain is this much simpler method works quite well in my admittedly simple example.

2010-10-24

IntelliMouse Explorer 1.0 in Windows 7

I have an old IntelliMouse Explorer. It's the original version, wired, but still works great. Alongside my Natural Keyboard Pro, it's an old, functional, comfortable piece of hardware that I refuse to get rid of. The replacements that have come along since often fall short in various ways. And, much like the Natural Keyboard Pro and other strong Microsoft hardware input devices from years gone by, Microsoft's software drivers have stopped supporting them.

There's no real reason for them not to work today. The keyboards haven't changed much, except to add or change the extra control keys sprinkled around the standard 121. Mice, even less so; they have the same X-Y directional input, five buttons, and a scroll wheel they've had for over a decade. But if you install the current version of IntelliType or IntelliPoint, they will refuse to detect your older keyboard and mouse; and even though the operating system will use them just fine for standard functions, all the fancy buttons and the ability to remap them (that used to work on older versions of the software) won't be available.

I came across this blog post on Blogfeld.com that describes in detail how to get a Natural Keyboard Pro to have full functionality in Vista and Windows 7. I followed these instructions earlier this month, and I can verify that they work flawlessly with the current version of IntelliType software (currently version 8). I thought maybe the same technique could be applied to get my old IntelliMouse Explorer to work with IntelliPoint 8 as well.

I won't post the details here — Blogfeld already does an excellent job at describing everything — I'll just indicate what I did to apply his technique to IntelliPoint.

I went looking for an old version of IntelliPoint on Microsoft's download site. You can still download IntelliPoint 5.2 (link as of the time of this post is here, but you can search for "IntelliPoint 5" on download.microsoft.com to find it); I installed it on a Windows XP workstation in order to get the old files.

On the Windows 7 machine, I opened up the point64.inf file (IntelliPoint's version of IntelliType's type64.inf — and yes, I'm using 64-bit; the 32-bit version would naturally be point32.inf), and in the [MsMfg…] section, I added the following string to the block of IDs listed:

%HID\Vid_045E&Pid_001E.DeviceDesc%=HID_Filtr_Inst, HID\Vid_045E&Pid_001E

Further down in the [Strings] section, I added:

HID\VID_045E&PID_001E.DeviceDesc="Microsoft USB IntelliMouse Explorer (IntelliPoint)"

The next step was to modify the IPointDevices.xml file. This one required a little more thought, as IntelliPoint 5 did not have an IPointDevices.xml file to copy from. I noticed, however, that the IntelliMouse Explorer 3, which is supported in IntelliPoint 8, has the exact same configuration as the IntelliMouse Explorer 1. So, I found the <Device> section that describes the IntelliMouse 3, copied it, and pasted it to the end of IPointDevices.xml. I changed the <Name> node to read, simply, "IntelliMouse Explorer", changed the <OemAbbreviation> node to "IME", and changed the value under <HWID Type='PID'> to read "0x001E" (the last four characters of the USB ID, used in the point64.inf file above). I also had to change the ID in the <Device> node itself to something that was not used elsewhere in the file — '10' was good enough.

I followed the rest of the instructions from Blogfeld, and sure enough, it worked great. The configuration screen in IntelliPoint uses the images of the IntelliMouse Explorer 3, and it allows configuration of all five buttons and the scroll wheel, including per-application settings, like any other mouse it "officially" supports.

2010-10-20

Firefox + Ajax + Refresh = Disaster

Usually, when coding a web page that's targeting users of IE and Firefox, the browser that's going to cause the lesser amount of problems is Firefox. So I was genuinely surprised when I came across a bug reported for Firefox only that came down to what I consider the browser misbehaving.

The requirements for our app included a series of dropdown boxes, where the selection a user makes in one dropdown drives the choices that appear in the next one (what's commonly referred to as a "cascading dropdown"). For a nicer user experience, this is typically done with AJAX, so that the request/response that generates the second dropdown upon selection of the first doesn't require an entire page refresh. ASP.Net makes this really easy with the UpdatePanel control. Controls inside of an UpdatePanel can be refreshed without reloading the entire page. It's not as lean as a pure AJAX call could be, since the server reprocesses the whole page, but the coding time is greatly reduced.

Our environment includes a standard master page that includes a ScriptManager component (required for using UpdatePanels) and the following script:


<script language="javascript" type="text/javascript"> 
    function onEndRequest(sender, args) {  
        ajaxPostBackButton.disabled = false;  
        var error = args.get_error();  
        if (error != null) {  
            window.location = "../errorPage.aspx";  
        }  
        var updateProgressPanel = $get("<%=this.UpdateProgressPanel.ClientID %>");  
        updateProgressPanel.className = "HideObject";  
    }  
    function onBeginRequest(sender, args) {  
        var ajaxPostBackButtonId = args.get_postBackElement().id;  
        ajaxPostBackButton = document.getElementById(ajaxPostBackButtonId);  
        ajaxPostBackButton.disabled = true;  
        var updateProgressPanel = $get("<%=this.UpdateProgressPanel.ClientID %>");  
        updateProgressPanel.className = "DisplayProgressLayer";  
    }  
    var ajaxPostBackButton;  
    Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(onBeginRequest);  
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(onEndRequest);  
</script> 

The script, in essence, binds a couple of functions to the AJAX start and stop methods that do this:

  • On start:
    • Disable the control used to trigger the AJAX call (this helps guard against double-posting)
    • Show a div that contains a "loading" animated gif to let the user know something's happening
  • On end:
    • Enable the control used to trigger the AJAX call
    • Check for an error, and if found, redirect the browser to the standard error page
    • Hide the div with the "loading" gif

Now, if the user makes a selection in the first dropdown, everything runs normally, and the second dropdown appears. If the user then presses F5 to refresh their browser, the browser reloads the page from its initial load state, i.e., with the first dropdown with the initial "Please select…" option selected, and no second dropdown.

At least, that's the way it works in IE. In Firefox, what I was seeing was, the first dropdown was getting selected to the option I had selected before I hit refresh, it was disabled, and there was no second dropdown.

Finding out why was no easy task. With the help of Firebug, I was able to show that, on refresh, neither the onBeginRequest nor the onEndRequest methods were being called, and those were the only places the dropdown's enabled state was being tinkered with. I could only conclude that Firefox itself was setting this state. But why, and how do I stop it?

A couple hours of internet searching on why a dropdown in an UpdatePanel would be disabled failed to yield any useful information. I did find one user complaining about Firefox repopulating form values with prior input on refresh; unfortunately, that user's request for how to get around it was met with a snarky response about how it was a useful feature of Firefox and how the user was mentally deficient for not appreciating it. Sorry, but when you're coding a web application that is trying to control the content of form values and states and react to changes, and the browser breaks all rules and changes those states without raising any events to react to, I'll have to go with the feature being deficient and buggy.

Coming at the problem the next day with a fresh set of search terms, I came across this blog post: Firefox refresh viewstate updatepanel bug hell!!! The post describes a more serious error that can occur with Firefox's mucking about with a form after refresh that got updated with AJAX. The solution, renaming the form's ID on every refresh, seemed a little more of a brute-force hack than I wanted, and he mentions it doesn't work well in a master page scenario anyway (which we're in).

The comments on that post, however, point to an article on developer.mozilla.org that describes the feature in more detail and, more importantly, how to turn it off. By adding the nonstandard attribute autocomplete="off" to the page's <FORM> tag, it suppresses this bothersome behavior and lets the page work as expected.

We're now determining if this action is something that should be done site-wide (add it in the master page's markup), as it could be an uncaught bug on other pages; or if it's something that should be done on a page-by-page basis, by adding this.Page.Form.Attributes["autocomplete"] = "off"; to the prerender event of any affected page.

2009-09-09

A nullable DataGridView CalendarColumn

A while back, I managed to fix a couple bugs with the sample DataGridView CalendarColumn control that made it much more usable. Today, I came across one more issue. It's pretty well known that the DateTimePicker, despite having support for a checkbox that lets you turn a date on or off, does not directly support "null" as a valid value. There are a bunch of ways to get around this, but what I came across was a need to support this inside of a DataGridView.

I started with the task of making the CalendarColumn configurable in such a way as to be able to turn the checkbox on or off at will (well, at least, at the moment of construction). That part's easy; I added a new constructor to CalendarColumn to take an "isNullable" flag:

public CalendarColumn() : base(new CalendarCell()) { }
public CalendarColumn(bool isNullable) : base(new CalendarCell(isNullable)) { }

Then, I added a class-level variable to my CalendarCell class, and initialized it in the constructor:

public class CalendarCell : DataGridViewTextBoxCell {
    private bool isNullable = false;

    public CalendarCell() : base() { 
        this.Style.Format = "d"; 
    }

    public CalendarCell(bool isNullable) : this() {
        this.isNullable = isNullable;
    }
[...]

The next modification comes in InitializeEditingControl. Immediately after getting the reference to CalendarEditingControl ctl is set:

    ctl.ShowCheckBox = this.isNullable;
    if (this.Value != null && this.Value != DBNull.Value) {
        if (this.Value is DateTime) {
            ctl.Value = (DateTime)this.Value;
        } else {
            DateTime dtVal;
            if (DateTime.TryParse(Convert.ToString(this.Value), out dtVal)) ctl.Value = dtVal;
        }
        if (this.isNullable) ctl.Checked = true;
    } else if (this.isNullable) {
        ctl.Checked = false;
    }

(Note that I added a little checking around the value setting area, because I'm paranoid like that.)

Next, DefaultNewRowValue:

public override object DefaultNewRowValue { get { if (this.isNullable) return null; else return DateTime.Now; } }

Here's what I found out you don't change. The ValueType property always returns typeof(DateTime). If you change this to typeof(DateTime?), what ends up happening is, if you bind to a DataTable, it attempts to insert an actual null into the table. Because (for reasons I have yet to believe necessary) null and DBNull are two completely separate and incompatible things, this fails. Apparently, by leaving the value type as non-nullable, a null value will get translated appropriately in the binding.

We're not quite done yet. In the CalendarEditingControl class, the EditingControlFormattedValue property needs to be updated:

public object EditingControlFormattedValue {
    get {
        if (this.ShowCheckBox && !this.Checked) {
            return String.Empty;
        } else {
            if (this.Format == DateTimePickerFormat.Custom) {
                return this.Value.ToString();
            } else {
                return this.Value.ToShortDateString();
            }
        }
    }
    set {
        string newValue = value as string;
        if (!String.IsNullOrEmpty(newValue)) {
            this.Value = DateTime.Parse(newValue);
        } else if (this.ShowCheckBox) {
            this.Checked = false;
        }
    }
}

And presto, a checkable, nullable DateTimePicker column that works in a DataGridView bound to a DataTable.

2009-07-29

This is not the SelectedValue you're looking for

Oh, here's a fun one.

I have a list of locations that I'm retrieving from a web service, and I want to display them in a ComboBox. I also want a blank entry at the top of the list, so "nothing" can be selected. While I'm sure there are ways to do this with less code, creating binding objects and such (especially since the retrieved list is already sorted in the order I want), I've resorted to just creating a DataTable with an ID column and a display column and copying the values I want into it. It does make things easier when one combo box filters another, since DataTables already have searching and filtering built in.
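The setup is roughly this (a sketch; the Location class with its Id and Name properties stands in for whatever the web service actually returns):

```csharp
// Build a DataTable by hand: ID column, display column,
// and a blank row on top so "nothing" can be selected.
DataTable dataTable = new DataTable();
dataTable.Columns.Add("LocationId", typeof(int));
dataTable.Columns.Add("Location", typeof(string));

DataRow blank = dataTable.NewRow();
blank["LocationId"] = DBNull.Value;   // columns allow DBNull by default
blank["Location"] = String.Empty;
dataTable.Rows.Add(blank);

foreach (Location loc in locations) {
    dataTable.Rows.Add(loc.Id, loc.Name);
}
```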

Anyway, after I get my data, I have a DataTable that looks like this:

LocationId    Location
------------  ----------------
DBNull.Value  String.Empty
2             Aurora
7             Colorado Springs
1             Denver
3             Ft. Collins
4             Longmont
6             Parker

Now, I bind it to my ComboBox like this:

combo1.DataSource = dataTable;
combo1.DisplayMember = "Location";
combo1.ValueMember = "LocationId";

Note that you have to set the DataSource first and the DisplayMember & ValueMember properties second, otherwise you get a ComboBox full of "System.Data.DataRow". Nice.

Now, on this form, when a value is selected, the selected value is used to filter another list; and, when the form is submitted, the selected value is passed off to a web service. How do you find out when a value is selected, and what that value is? Simple; bind to the SelectedValueChanged event, and query the SelectedValue property, right? In theory.

In practice, no. To make a long story short, I dropped a label onto the form so I could see what was going on in "real time". First, the code:

private void combo1_SelectedValueChanged(object sender, EventArgs e) {
    label1.Text = Convert.ToString(combo1.SelectedValue) + " - "
        + Convert.ToString(((System.Data.DataRowView)combo1.SelectedItem)["LocationId"]);
}

And the results:

ComboBox selection  Label1.Text
------------------  -----------
(blank)              - 
Aurora              1 - 2
Colorado Springs    2 - 7
Denver              3 - 1
Ft. Collins         4 - 3
Longmont            6 - 4
Parker              7 - 6

Incidentally, the SelectedText property was always blank.

On the up-side, as you can see, I found a way to get the real value (using the SelectedItem and casting appropriately). On the down-side, it means I have to use that construction to get the selected value on this and every other similar ComboBox (there are actually four on this form alone), since SelectedValue, apparently, doesn't.
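To avoid scattering that cast across all four ComboBoxes, the workaround can at least be pulled into a small helper (the method name is mine):

```csharp
// Hypothetical helper: pull the "real" selected value straight from the
// bound DataRowView, since SelectedValue reports the wrong row's value here.
private static object GetActualSelectedValue(ComboBox combo) {
    System.Data.DataRowView row = combo.SelectedItem as System.Data.DataRowView;
    return row == null ? null : row[combo.ValueMember];
}
```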

2009-06-10

I'll handle any key you want... except that one

Here's one of those that makes a little more sense when you think about it, but it still leaves you scratching your head until you figure out what's going on. And then you think about it some more and realize, no, it doesn't make sense at all.

When you want to capture keypresses in a control on a Windows form, the "best" place to do that is the KeyDown event. It gives you the ability to capture keys, the state of the modifier keys (Control, Alt, Shift), and to mark that keypress as being "handled" so the key does not continue on as actually being typed. It also provides access to capturing unprintable keys (like Escape or Insert).

However, one thing it does not do is let you capture the Tab key. This is because the control doesn't even get a KeyDown event when the Tab key is pressed. (Indeed, Microsoft considers it a bug when it does trigger a KeyDown.)

You can sort of rationalize this behavior when you realize that the Tab key is "special". The form uses that key to determine when to move from one control to the next. So, in that way, it would make sense for the Tab key to not even get to the control. However, you can easily debunk that rationalization with two controls: a DataGridView and a TextBox. A DataGridView, with its StandardTab property set to the default of False, will, when a Tab key is pressed, take it upon itself first to move the active cell through the grid, and only when the last cell is reached then let the form move focus to the next control. Likewise, a TextBox that has both Multiline and AcceptsTab properties set to True will take any Tab keypresses and add a tab character to the input, never passing that keypress to the form.

In any case, it seems that, for whatever reason, the Tab key is handled by the form after these possible control overrides get their chance, but before the KeyDown event would fire.

Solution? Well, it turns out, there are two.

The first one I found is to override the form's ProcessCmdKey method. Start by calling the base ProcessCmdKey method, which returns a boolean that indicates whether the key was handled. Then, check to see if this.ActiveControl is the control for which you want to trap the Tab key, and if the keyData parameter passed in to ProcessCmdKey contains the Tab key ((keyData & Keys.Tab) == Keys.Tab). If so, do whatever it is you wanted to do, and set the boolean return value to true, indicating you've processed the key yourself. At the end of the method, return the boolean. (I found a simple example here.)
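Putting those steps together, the override looks roughly like this (a sketch; `myGridView` stands in for whatever control you're trapping, and the actual Tab handling is up to you):

```csharp
protected override bool ProcessCmdKey(ref Message msg, Keys keyData) {
    bool handled = base.ProcessCmdKey(ref msg, keyData);
    if (this.ActiveControl == myGridView && (keyData & Keys.Tab) == Keys.Tab) {
        // ...do whatever the Tab key should do for this control...
        handled = true;  // tell WinForms we've processed the key ourselves
    }
    return handled;
}
```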

The annoying part is, if you were trapping for more than just the Tab key in the control's KeyDown event, you now have your logic split in two different methods (control_KeyDown and form override ProcessCmdKey). Also, if you are doing this for multiple controls (as I was for, coincidentally, a DataGridView and a [single-line] TextBox), you have to copy and combine the logic from the other controls' separate KeyDown events into the ProcessCmdKey method, testing the ActiveControl to make sure you know which control you're in.

The advantage, however, is that you only have to test the cases you're interested in. If, for instance, you want to catch a Tab, but still allow a Shift+Tab to default to the standard form's handling, you just test for an un-shifted Tab and do your work, and let the Shift+Tab condition "fall through".

The second solution is to bind to the controls' PreviewKeyDown event. This one fires in that sweet spot before the form claims it as a "command key". The event gets the parameter PreviewKeyDownEventArgs e. If you want to handle the Tab key in the KeyDown event, just check to see if e.KeyCode == Keys.Tab, and if so, set e.IsInputKey = true.

The disadvantage to this method, however, is that now you are responsible for moving the focus as appropriate — you can't just let the handling "fall through" to the form, because it won't handle it anymore. Depending on how confident you are on the structure of your form, you can either call another control's Focus method or the form's SelectNextControl method (and don't forget to set the Handled property in KeyDown, otherwise the new control might receive the same Tab keypress as well, or it might try to type the Tab as input into the old control — either way, the results might not be pleasant).
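Sketched out for a single TextBox, the pair of handlers might look like this (the focus-moving call is just one option, and the "custom handling" is whatever you were after in the first place):

```csharp
private void textBox1_PreviewKeyDown(object sender, PreviewKeyDownEventArgs e) {
    // Mark Tab as an input key so KeyDown sees it before the form claims it.
    if (e.KeyCode == Keys.Tab) e.IsInputKey = true;
}

private void textBox1_KeyDown(object sender, KeyEventArgs e) {
    if (e.KeyCode == Keys.Tab) {
        // ...custom Tab handling here...
        e.Handled = true;  // keep the keypress from leaking anywhere else
        // forward unless Shift is held; tab stops only; nested; wrap around
        this.SelectNextControl((Control)sender, !e.Shift, true, true, true);
    }
}
```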

Personally, I like the second solution better. Although there is more logic required to do what should be standard Tab handling, it means the logic for my non-standard Tab handling is in one place per control, not split across two different events.

2009-04-20

Fixing the DataGridView's CalendarColumn sample

In a DataGridView, it is sometimes convenient to have a column that is a DateTime that allows a user to use the standard DateTimePicker control for choosing dates. Microsoft has a code sample for a CalendarColumn class that does this, and it's more or less the standard.

Except if you actually try to use it in a WinForms application, you'll find it has two rather glaring flaws:

  1. If you type in a date but don't enter a full two digits for the month or day (or four digits for the year) before tabbing out of the control, your change won't be committed.
  2. If you type a date (or part of one) and tab through to the same column in the next row, then start typing, the first field highlighted (month, day, or year) will be whichever field was highlighted when you left the previous row. (Example: you type "04/04/2009", tab until the next row's date column, and start typing "04"; you'll find you're editing that date's year.)

I found this forum thread that addresses the first concern. ALight's answer involves adding an event handler to the DataGridView's CellValidating event. Because we use grids all over the application, it would've been very inconvenient to alter every grid and add this code. Instead, since our grids are built dynamically, I did it by adding this code to the CalendarColumn class (that inherits DataGridViewColumn):

protected override void OnDataGridViewChanged() {
    base.OnDataGridViewChanged();
    if (this.DataGridView != null) {
        this.DataGridView.CellValidating += new DataGridViewCellValidatingEventHandler(CalendarDataGridView_CellValidating);
    }
}

protected void CalendarDataGridView_CellValidating(object sender, DataGridViewCellValidatingEventArgs e) {
    if (sender is DataGridView) {
        DataGridView dgv = (DataGridView)sender;
        if (e.ColumnIndex >= 0 && e.ColumnIndex < dgv.Columns.Count
            && e.RowIndex >= 0 && e.RowIndex < dgv.Rows.Count) {
            if (dgv.Columns[e.ColumnIndex] is CalendarColumn && dgv.Columns[e.ColumnIndex] == this) {
                if (dgv.EditingPanel != null) dgv.EditingPanel.Select();
            }
        }
    }
}

When the CalendarColumn is added to the grid's Columns collection, its DataGridView property is updated, and this routine fires to bind the method to the grid's CellValidating event. Multiple columns will cause multiple event handlers to be bound, but the dgv.Columns[e.ColumnIndex] == this part of the if statement should ensure that only one instance will run the code for any given column. (Note that I have not tested this in cases where columns may be removed from the grid. It should probably work fine, as the event will still be bound and still fire, but the column will never match.)
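If column removal ever does turn out to matter, one defensive variation is to remember the last grid and unhook the handler when the column moves (an untested sketch, not what's running in our application):

```csharp
// Assumption: a field tracking the last grid this column's handler was attached to.
private DataGridView boundGrid;

protected override void OnDataGridViewChanged() {
    base.OnDataGridViewChanged();
    if (this.boundGrid != null) {
        this.boundGrid.CellValidating -= CalendarDataGridView_CellValidating;
    }
    this.boundGrid = this.DataGridView;
    if (this.boundGrid != null) {
        this.boundGrid.CellValidating += CalendarDataGridView_CellValidating;
    }
}
```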

To solve the second problem, I did some experimentation with the existing control methods. In Microsoft's sample, there is a comment in the CalendarEditingControl's method PrepareEditingControlForEdit that reads "No preparation needs to be done." I found this to be not quite correct. The following change resets the control so, when you start typing, you start editing on the month, like you'd expect:

public void PrepareEditingControlForEdit(bool selectAll) {
    if (selectAll) base.RecreateHandle();
}

Seems to work so far, anyway.