
Archive for the ‘Programming’ Category

This is a serious question, though maybe you’re thinking about the wrong answers. See, I understand why one would want to decouple communication between a subject and observer. What I don’t understand is why we need a “fancy schmancy aggregator object” to do this. I’ve seen several designs, and several criticisms of every one of those designs, and in every case I wonder if there’s a design that could make everyone happy. More importantly, though, I keep coming back to wondering why we need an aggregator at all. Seems to me a static event and a Publish method would eliminate the need for one.

public static class Events
{
    public static event EventHandler CustomerAdded;

    public static void PublishCustomerAdded(object sender)
    {
        EventHandler handler = CustomerAdded;
        if (handler != null)
        {
            handler(sender, EventArgs.Empty);
        }
    }
}

To subscribe to the “aggregated event” you just follow normal conventions.

private void Subscribe()
{
    Events.CustomerAdded += OnCustomerAdded;
}

To publish the “aggregated event” you simply call the method provided for this purpose.

private void Publish()
{
    Events.PublishCustomerAdded(this);
}

There are lots of benefits to this design.

  • No “magic strings” are utilized.
  • We’re strongly typed.
  • There’s no runtime overhead involved in registering and looking up an “aggregated event”.
  • The code follows the same patterns as “non-aggregated events” do.
    • This means the same mechanisms can be employed for “weak events”, though it also means strong references by default.
    • This also means there are no artificial constraints on the event signature.
  • You can control who can publish the event. The Onyx View.ViewCreated event is an example, where only instances of the View type itself can publish the event, but subscribers are still decoupled from the actual instances of the View type.
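The last bullet can be sketched roughly like this (my own illustration of the idea, not the actual Onyx code): the event itself is static and public, but the publish method is a protected instance member, so only View instances (and subclasses) can raise it.

```csharp
using System;

public class View
{
    // Anyone can subscribe to the static event...
    public static event EventHandler ViewCreated;

    public View()
    {
        this.PublishViewCreated();
    }

    // ...but only a View instance can publish it, because the publish
    // method is a protected instance member.
    protected void PublishViewCreated()
    {
        EventHandler handler = ViewCreated;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

class Demo
{
    static void Main()
    {
        View.ViewCreated += (sender, e) =>
            Console.WriteLine("Observed creation of a " + sender.GetType().Name);
        new View(); // prints: Observed creation of a View
    }
}
```

Subscribers never need a reference to any particular View instance, so they stay decoupled, yet publishing stays under the type’s control.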

The only downside I see is that there doesn’t have to be a single “registry” for aggregated events (though there can be), but whether or not that’s a downside is certainly debatable. I’m really curious: what do other people see as reasons to prefer an EventAggregator over this simple pattern?


In case you’ve missed the announcement, Microsoft has released a new toolkit as part of the WPF Futures which is meant to aid in the development of WPF applications that follow the M-V-VM pattern (henceforth, I’ll call this the View Model pattern). Here are my impressions of what was released.

First, I’ll look at the Word documents included. There are two parts to this. Part 1 provides a general introduction to the pattern. The first section of this document discusses the “Model-View-X Paradigm”. Eek. That rankles. Sorry, I’m nit-picking here, but this is meant as constructive criticism: why “Paradigm” instead of “Pattern”, which is the accepted word? In any case, this section seems to struggle a bit with describing what differentiates View Model from the other patterns, including a reference to Presentation Model with little explanation of why it’s mentioned (perhaps because they’re the same pattern?). I think this lends further proof that we either should be able to communicate the differences, or we should stick with a single name for the pattern.

The rest of the document tries to justify the pattern with a sample application, but I think the justification is poorly presented. The problem is that it works backwards, starting with a spaghetti implementation, refining it a bit and trying to explain why the result is better. It would work better to state up front why one would use the pattern, and then demonstrate the differences in an implementation. As it is, figuring out the “why” requires reading the entire document. The current structure also forces unit testing to come last, which many TDD and BDD practitioners will take issue with. When unit testing is such an important reason for using this pattern, I really think this is a mistake.

Finally, the section on commanding could use some work. There’s little explanation of why RoutedCommands are problematic, and the one reason given isn’t even valid: the claim that the target of a RoutedCommand must be part of the UI element tree is disproved by the Onyx command binding services, which easily create command bindings for RoutedCommands with the ViewModel as the target.

Part 2 provides a walkthrough of writing a contact book application following View Model pattern. The overview describes the roles of the three components: Model, View and ViewModel. I’ll take exception to the description of the ViewModel, which explicitly states the ViewModel should not be aware of the View. That’s not really a requirement of the pattern, and in fact, while you should avoid tight coupling as always, some knowledge of the View is often beneficial if not necessary. I don’t mind code/frameworks that are opinionated on this topic, but this isn’t a framework, and I’m not sure that Microsoft should be in the business of being opinionated on topics like this.

When you run the project template, it creates a project with four folders: Commands, Models, ViewModels and Views. I’m not sure how I feel about that. I will say that I’ve never used a folder structure like this. My Models are nearly always in a separate project. I prefer my ViewModels to live right alongside the Views, as it makes it easier to find and open these related components. Finally, my Commands are usually defined in the ViewModels and not in separate classes, much less folders. This structure is typical in a web MVC architecture, but this isn’t MVC, and I just don’t think I like this layout.

The template imperatively creates the View and the ViewModel and associates them in code. I don’t care for this at all, much preferring a declarative approach using the XAML markup. The association is also made solely through the DataContext, which is fine, but I prefer to use a different dependency property (which usually sets the DataContext as well). Within a View, the DataContext will change frequently (for instance, in ItemsControls), but I still want to be able to talk about the ViewModel associated with the child elements.

The template creates a DelegateCommand class with a good implementation. One sincerely hopes this is a stopgap solution, as DelegateCommand really should reside in a library (preferably the BCL) and not be generated by the template. In any event, the DelegateCommand is certainly a best practice, and it appears to be the only such concept actually included here, which is strange. Where’s the ViewModel base? Where are the helpers for validation, INotifyPropertyChanged and the like? I realize this isn’t a framework, but there should still be more support from the templates. How about a ViewModel item template that creates a class implementing INotifyPropertyChanged? One hopes that version 0.2 starts to address the low-hanging fruit here.
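For readers who haven’t seen one, this is roughly the shape a DelegateCommand takes. This is my own reconstruction for illustration, not the toolkit’s actual code.

```csharp
using System;
using System.Windows.Input;

// A minimal DelegateCommand sketch: wraps delegates for Execute/CanExecute.
public class DelegateCommand : ICommand
{
    private readonly Action execute;
    private readonly Func<bool> canExecute;

    public DelegateCommand(Action execute) : this(execute, null)
    {
    }

    public DelegateCommand(Action execute, Func<bool> canExecute)
    {
        if (execute == null)
        {
            throw new ArgumentNullException("execute");
        }

        this.execute = execute;
        this.canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return this.canExecute == null || this.canExecute();
    }

    public void Execute(object parameter)
    {
        this.execute();
    }

    // Lets the ViewModel tell the UI to re-query CanExecute.
    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = this.CanExecuteChanged;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

class Demo
{
    static void Main()
    {
        int count = 0;
        var command = new DelegateCommand(() => count++);
        command.Execute(null);
        Console.WriteLine(count); // prints: 1
    }
}
```

A ViewModel can then expose a command property backed by `new DelegateCommand(this.ClearContacts)` (the method name here is made up), which is exactly why I keep commands in the ViewModels rather than a separate Commands folder.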

The section on unit testing is going to be controversial. Not only does the walkthrough not follow TDD/BDD, as I pointed out earlier, but it utilizes the “Create Unit Tests…” feature, which is a questionable practice even if you’re not going to follow “test first”. The “ClearContactBookCommandTest()” is long and complicated, with a total of 6 asserts spread throughout the entire test. Definitely not unit testing best practice. What’s really heartbreaking is that there are even three sections of asserts with comments that read “// Validate something”. Isn’t that a clear indication they should be separate tests?

I hope future releases do a better job of promoting best practices here. The Toolkit is probably useful to some, though I won’t be using it, and I can’t point someone new to View Model at this as a learning point.


I previously wrote about some ideas I was toying with for Specificity to add a base class to aid in writing BDD style test fixtures in a portable (works with any unit testing framework) fashion. I’ve been using a solution like I posted then ever since, and have concluded the design is a bit fragile. Relying on the actual test methods to properly use the Result property in order to ensure Arrange and Act are called just isn’t a production quality design.

That got me to thinking, however. What I was trying to do here was a poor man’s attempt at AOP (Aspect Oriented Programming). The .NET world already has a better (though not perfect) solution for AOP in the form of ContextBoundObject. I should be able to leverage that and get a much better implementation of the concepts I wanted. In fact, there’s a CodePlex project that uses this idea already, called MSTestExtensions. I didn’t care for the way you had to write extensions using MSTestExtensions, and it wasn’t going to work out of the box for my BDD base class in any event, so I had to just borrow the idea without using it. So, I spent a couple of days hacking, and the result is now in the Specificity project.

I’m still not entirely convinced I’ve got the correct extension mechanism baked in, so if you start to use this code, beware that I’m very likely to make breaking changes in this area of the API.

So, what did I come up with? First, there’s an ExtendableTestFixture base class that you inherit from if you want to be able to “extend” your test methods. This provides all of the ContextBoundObject magic, as well as provides the necessary hooks for you to be able to create TestExtensionAttribute attributes that actually modify the process flow of the test methods, allowing you to add code before or after the test method. The way you code a TestExtensionAttribute is where I’m most likely to make breaking changes.

The BDD base class, called Observation in the previous post, is now called SpecificationContext. It inherits from TestExtensionBase and uses an internal SpecificationContextAttribute to cause all test methods to run the Arrange, Act and Teardown virtual methods. The use is the same as in the previous post (minus the Result property magic), but the implementation is now much more robust.

There is a downside. ContextBoundObject adds some overhead to object creation and method calls that will slow your tests down. In my experience the overhead isn’t all that great, and I’m more than willing to accept it for the benefits provided here, but you’ll have to judge for yourself.

There’s a couple of other test extension attributes included as well, just to illustrate that this is useful even if you’re not interested in using the SpecificationContext concept. There’s an ExpectedDurationAttribute which allows you to declaratively specify that a test shouldn’t take longer than some amount of time, and a TestTransactionAttribute that declaratively wraps your test method in a TransactionScope. I’ll admit, I’m not convinced attributes like these are appropriate… I’d prefer the non-declarative approaches, for many of the same reasons that I prefer Specify.That(action).ShouldThrow<Exception>() over ExpectedExceptionAttribute. However, I don’t want to force my opinions here on users, and I did need some attributes to illustrate what’s possible here.

I’d love to hear feedback on this.


I’ve been using an assertion framework at least similar to the one I’ve put up on CodePlex, called Specificity, for quite some time now. The main goal of that library is solely to provide an assertion framework that’s extendable and discoverable, and it does so in a fashion that doesn’t tie it to any specific test framework. The naming conventions used within Specificity follow more of a BDD form than a TDD form, with Should instead of Assert. However, I’ve not actually used any of the BDD frameworks available for .NET, mostly because I have to use MsTest at work, and I’m accustomed to using it, so I continue to use it outside of work as well.

Recently, I’ve been experimenting with following BDD at least to the extent that my unit testing framework of choice will allow (the Test terminology leaks through in the attribute names used, which BDD purists would not care for). This has eventually led me to experiment with a base class that I think may be useful for inclusion in Specificity. This base class isn’t tied to any testing framework, and thus could be a drop-in base class for tests you write using any existing testing framework, much like the assertions in Specificity. The base class is really quite simple, though I’m sure it could be expanded. Part of the inspiration came from Jean-Paul S. Boodhoo, though I’ve simplified it quite a bit, in ways I think most developers would be more comfortable with. Here it is in all its glory (such as it is):

public abstract class Observations<TResult>
{
    private bool acted;
    private TResult result;

    protected TResult Result
    {
        get
        {
            if (!this.acted)
            {
                ArrangeAndAct();
            }

            return this.result;
        }
    }

    protected virtual void Arrange()
    {
    }

    protected abstract TResult Act();

    private void ArrangeAndAct()
    {
        Arrange();
        this.result = Act();
        this.acted = true;
    }
}

That’s it. I told you there wasn’t much to it. I’ve chosen to follow the AAA (Arrange, Act and Assert) terminology rather than the “context” and “because” terminology used by Jean-Paul, but the examples from the blog post of his that I linked are easily translated. Here’s his first example, just to give you the flavor of how this is used.

[TestClass]
public class When_adding_2_numbers : Observations<int>
{
    protected override int Act()
    {
        return 2 + 2;
    }

    [TestMethod]
    public void should_result_in_the_sum_of_the_2_numbers()
    {
        Specify.That(this.Result).ShouldBeEqualTo(4);
    }
}

Of course, this was in MsTest, but you should be able to easily translate it to whatever test framework you use. I’ve not leveraged the Setup concepts available in most testing frameworks because doing so would tie the code to a specific testing framework, and because some frameworks (looking at you, xUnit.net) don’t have the concept, relying on the constructor instead. I’ve not addressed cleanup here, though it should be possible to work that out. If your test methods are side effect free (they should be), there’s little purpose in enabling cleanup, so I’ve not bothered to think too deeply about that one.
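If you want to see the lazy Arrange/Act flow outside of any test framework, here’s a standalone program: the base class copied from above, plus a contrived CountingObservations subclass of my own, to show that Act runs exactly once no matter how many observations read Result.

```csharp
using System;

public abstract class Observations<TResult>
{
    private bool acted;
    private TResult result;

    protected TResult Result
    {
        get
        {
            if (!this.acted)
            {
                ArrangeAndAct();
            }

            return this.result;
        }
    }

    protected virtual void Arrange()
    {
    }

    protected abstract TResult Act();

    private void ArrangeAndAct()
    {
        Arrange();
        this.result = Act();
        this.acted = true;
    }
}

// A contrived subclass that counts how many times Act is invoked.
public class CountingObservations : Observations<int>
{
    public int ActCalls;

    protected override int Act()
    {
        this.ActCalls++;
        return 2 + 2;
    }

    public int Observe()
    {
        return this.Result;
    }
}

class Program
{
    static void Main()
    {
        var obs = new CountingObservations();
        Console.WriteLine(obs.Observe()); // prints: 4
        Console.WriteLine(obs.Observe()); // prints: 4 (Act does not run again)
        Console.WriteLine(obs.ActCalls);  // prints: 1
    }
}
```

This memoization is what lets every observation method share one Arrange/Act pass, which is the whole point of the base class.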

This seems extremely simplistic, but the point behind the base class is to force a BDD approach to testing, where you have a test per feature, rather than a test per class. It’s a poor man’s addition to a TDD framework to enable a BDD style.

I’d love to hear input on this one. What do people think of the idea? What about the implementation? Should something like this go into Specificity? Nit-pick me to death, please.

Related: Behavior Driven Design and Specificity – Part II


Onyx and Specificity

In case you’ve not seen it other places, I’ve started a couple of CodePlex projects myself. The first is an assertion library for unit testing, with a focus on extensibility and IntelliSense discoverability, called Specificity. The second is a WPF framework to aid in the development of applications that use the M-V-VM design, called Onyx. Both are still under development and haven’t yet had a release, but the code is usable and worth checking out.


Dueling Banjos

In this case, the “banjos” are FXCop and StyleCop.  Can’t we all just get along?

Here’s the deal. I’m a big proponent of using static checkers such as FXCop and StyleCop.  I think they go a long way towards improving the quality of code. However, lately there have been a few things about these tools that have been driving me crazy.

Let’s start with FXCop. There are a couple of warnings you run into frequently when developing in WPF (at least, I do). CA1810 is a warning about performance when you use a static constructor instead of a static initializer. My first thought is: how bloody important is this sort of optimization? The hit can’t be that big, especially when you consider it’s a one-time hit and not something that can be amplified by usage in a loop. This sounds like premature optimization to me, and we all know the famous quote about that! Normally this would be a minor thing to comply with, and it wouldn’t occur all that often. In fact, my natural instinct is to use an initializer. However, in WPF the static constructor is often required for things like registering class event handlers with the EventManager. You can’t really use an initializer for this, and suppressing the warning every time it happens is a PITA. At least with FXCop I can turn the rule off, even if I do have to do that manually for each and every project (hint, Microsoft: we could use some sort of solution-based configuration here).
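For reference, here’s the distinction CA1810 draws, in a small illustration (the 42s are placeholders; in WPF the static constructor body would be something imperative like an EventManager.RegisterClassHandler call, which can’t be expressed as a field initializer):

```csharp
using System;

class WithInitializer
{
    // A static field initializer: no explicit static constructor,
    // so CA1810 stays quiet.
    public static readonly int Value = 42;
}

class WithStaticConstructor
{
    public static readonly int Value;

    // An explicit static constructor triggers CA1810, but it's the only
    // place for imperative one-time setup.
    static WithStaticConstructor()
    {
        Value = 42;
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(WithInitializer.Value);      // prints: 42
        Console.WriteLine(WithStaticConstructor.Value); // prints: 42
    }
}
```

Both forms produce the same observable value; the rule is purely about the type-initialization cost, which is exactly why it feels like premature optimization when WPF forces your hand.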

The next FXCop warning that’s annoying me a lot is CA1004, which complains when you use a generic type parameter only in the return type and not as a parameter type. Seems this is supposed to be “confusing” for developers. Well, I call BS. If you don’t understand how generics work, you probably shouldn’t be coding in .NET languages that support the concept. If you look around it’s really not uncommon at all to have code that uses a generic parameter as the return type, as syntactic sugar to simplify scenarios where you’d otherwise have to employ a cast. Again, though, I can turn this one off.

StyleCop has a lot of rules I don’t agree with as well. The bloody file header stuff serves no real purpose, other than keeping specific legal counsel happy. Legally, you don’t have to provide a copyright statement at all: your code will still be protected by copyright laws. You certainly don’t have to repeat it for every file in a project. Not only does it clutter the source, it’s a PITA when the copyright notice must change (such as a change in the date). I also hate the warning that wants you to place the imports inside the namespace. Visual Studio doesn’t like this, either. The default templates put the imports outside, and IntelliSense helpers have a hard time with indentation for imports they add inside the namespace. All for something that, despite attempts to make it sound technically well-founded, is really just a “tabs vs. spaces” kind of debate.

However, what I’m here to talk about today is how these two tools sometimes don’t like each other. The specific issue I want to talk about is with naming member variables. See, StyleCop doesn’t like you to use “warts” or “Hungarian notation” at the beginning of member variable names. This means the typical “m_” and “_” prefixes on member variable names are verboten, according to StyleCop. OK, let’s not get into any “religious arguments” over this. I prefer the warts, honestly, but I can live without them. So, the warts are gone. I no longer use them, in order to keep StyleCop happy. Unfortunately, this means I often make FXCop unhappy. See, FXCop has this warning, CA1500, which complains when you use a local variable name that’s the same as a member variable name (thankfully, it doesn’t complain when you do this with parameters to constructors). However, it’s still fairly common to need a local representation of the same data as the member. Argh!!! I’ve actually resorted to using names like “theWidget” instead of “widget” for local names, just to shut the tools up. This is a wart, but because it’s a word, StyleCop won’t complain. Sad. Very sad. I’d rather go back to using “m_” or “_” on member variable names (though “this._widget”, which another StyleCop rule requires, is a bit silly).

Well, enough ranting. I’ve got work to do.


Ran across this MSDN page while searching for something else today. A “Community Content” response by “mike_msdn” asked the following question:

If NewItemEvent fires twice or more while the consumer thread is busy (i.e. not waiting on WaitHandle.WaitAny), then won’t the consumer thread only be called once, missing the other calls?

The short answer to this is “yes”. Like many other code samples in MSDN, this one is utter crap. Just riddled with bugs. Be careful you understand the code when using any examples you find on the Internet, even if the source is one that should be authoritative.

(An interesting aside: there are folks at Microsoft who understand this topic, yet we still get broken samples in the MSDN.)

Like the title of this blog says, I’m going to go out on a limb and say that EventWaitHandle should be considered harmful. Actually, this isn’t the first time I went out on that limb, as when I worked on Boost.Threads I constantly had to explain how Win32 synchronization event objects were dangerous little buggers that should be avoided like the plague. I wish the things didn’t exist, as 9 times out of 10 (or worse) when someone uses an EventWaitHandle, they shouldn’t have. It’s a rare scenario in which an EventWaitHandle can be safely used, and an even rarer scenario in which it’s the solution you should choose.

So, what’s wrong with EventWaitHandle? It doesn’t address the issue of synchronizing access to shared state. This means that if shared state is involved you have to use some other synchronization mechanism in conjunction with the EventWaitHandle. However, this introduces race conditions between the wait and the lock. This is a subtle race condition as well… one that will go undetected for years, and then fail miserably at the worst possible time. Don’t believe me? Do a Google search on how to implement a “condition variable” on Win32. A condition variable is a special synchronization concept that combines the “unlock-wait-lock” set of operations into a single atomic operation, avoiding the race conditions I’m talking about here. When you do the Google search, the first thing you should notice is how complicated many of the solutions are. That should be enough to convince you that EventWaitHandle is the wrong solution in these scenarios. If not, really start to dig into the search results and see how most of those implementations have been proven to be broken. Then look at the implementation in Boost.Threads. If you can understand that implementation, then you have enough knowledge to safely use an EventWaitHandle… but you’ll also know better than to do so ;).

Back to the question by “mike_msdn”. If the sample code is broken, how would you implement a producer/consumer? Wikipedia has an entry on this. You could decide to use a semaphore solution, as the article shows. This requires two Semaphore objects and a lock. Unlike EventWaitHandle solutions, when coded correctly there’s no race condition between the Wait on the Semaphore and the lock, because we take advantage of the semantics of the Semaphore count. The other solution, and the simplest, is to use a “monitor”. Remember that “condition variable” I talked about before? Well, a “monitor” basically marries a “condition variable” and a “mutex” into a single concept. The .NET runtime has supported this concept since the beginning, with the Monitor static class. Here’s partial code to “fix” the buggy MSDN code using a monitor (you should be able to figure out the missing pieces of code… I don’t have the time right now to create a fully working sample).

public class SyncObject
{
    public bool exit;
}

public class Producer
{
    private readonly Queue<int> _queue;
    private readonly SyncObject _sync;
    public Producer(Queue<int> q, SyncObject sync)
    {
        _queue = q;
        _sync = sync;
    }
    public void ThreadRun()
    {
        int count = 0;
        Random r = new Random();
        lock (_sync)
        {
            while (true)
            {
                while (_queue.Count >= 20 && !_sync.exit)
                { // This loop waits for the consumer, but wakes if told to exit
                    Monitor.Wait(_sync);
                }
                Monitor.Wait(_sync, 0); // Briefly release the lock so the exit flag can be set
                if (_sync.exit)
                {
                    break;
                }
                _queue.Enqueue(r.Next(0, 100));
                Monitor.Pulse(_sync);
                count++;
            }
        }
        Console.WriteLine("Producer thread: produced {0} items", count);
    }
}

public class Consumer
{
    private readonly Queue<int> _queue;
    private readonly SyncObject _sync;
    public Consumer(Queue<int> q, SyncObject sync)
    {
        _queue = q;
        _sync = sync;
    }
    public void ThreadRun()
    {
        int count = 0;
        lock (_sync)
        {
            while (true)
            {
                while (_queue.Count == 0 && !_sync.exit)
                {
                    Monitor.Wait(_sync);
                }
                if (_sync.exit)
                {
                    break;
                }
                _queue.Dequeue();
                Monitor.Pulse(_sync);
                count++;
            }
        }
        Console.WriteLine("Consumer thread: consumed {0} items", count);
    }
}

Like I said, the Monitor stuff has been in .NET from the very beginning. It’s sad that very little code makes use of it. It’s scary that a lot of code that doesn’t is instead relying on buggy constructs, often using EventWaitHandle objects.
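For what it’s worth, here’s one way to wire the pieces together into a single self-contained program. This is a sketch: the shutdown protocol (an exit flag plus Monitor.PulseAll) and the Run timing are my own choices, since I left that part out above.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// A condensed, self-contained monitor-based producer/consumer.
class MonitorDemo
{
    private readonly Queue<int> queue = new Queue<int>();
    private readonly object sync = new object();
    private bool exit;
    private int consumed;

    private void Producer()
    {
        var random = new Random();
        lock (this.sync)
        {
            while (!this.exit)
            {
                // Wait while the queue is full, but wake up if told to exit.
                while (this.queue.Count >= 20 && !this.exit)
                {
                    Monitor.Wait(this.sync);
                }

                if (this.exit)
                {
                    break;
                }

                this.queue.Enqueue(random.Next(0, 100));
                Monitor.Pulse(this.sync);
            }
        }
    }

    private void Consumer()
    {
        lock (this.sync)
        {
            while (true)
            {
                while (this.queue.Count == 0 && !this.exit)
                {
                    Monitor.Wait(this.sync);
                }

                if (this.exit && this.queue.Count == 0)
                {
                    break; // drained and told to stop
                }

                this.queue.Dequeue();
                this.consumed++;
                Monitor.Pulse(this.sync);
            }
        }
    }

    public int Run(int milliseconds)
    {
        var producer = new Thread(this.Producer);
        var consumer = new Thread(this.Consumer);
        producer.Start();
        consumer.Start();
        Thread.Sleep(milliseconds);
        lock (this.sync)
        {
            this.exit = true;
            Monitor.PulseAll(this.sync); // wake both threads so they see the flag
        }

        producer.Join();
        consumer.Join();
        return this.consumed;
    }

    static void Main()
    {
        Console.WriteLine("Consumed {0} items", new MonitorDemo().Run(200));
    }
}
```

Note how every Wait sits in a loop that rechecks its condition, and how the consumer drains the queue before honoring the exit flag; both details matter for correctness with monitors.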


Snippet Designer

Just ran across this. Microsoft has released an internal tool called Snippet Designer. This is a Visual Studio 2008 add-in, and is available from CodePlex. Snippets are invaluable in Visual Studio, especially with WPF (does anyone really hand-code a Dependency Property?). However, creating, editing and managing snippets has never been all that fun. I’ve tried numerous GUI applications that were meant to help you out here, but they never really worked all that well. In the end, I’ve always just resorted to doing things by hand, which works, but is a large enough barrier to entry that I don’t fix little problems in my snippets as often as I should. This tool looks very promising, though it has some issues. For instance, it fails to find my user snippets. It appears the code expects to find a %LocalAppData%\SnippetIndex.xml file, but there is no such thing. Also, when I open a Microsoft-supplied snippet, the Error List complains about the snippet expansion macros. That’s sort of a cosmetic issue, but still one that bothers me.


The “var” controversy

There’s some blog buzz going on right now about the appropriateness of using the new C# "var" keyword.  I first ran across the meme from Jean-Paul S. Boodhoo’s blog, with this post.  He later linked to a post by Ilya Ryzhenkov on the same subject.  One of the responses on Ilya’s blog read:

"The upshot here is that vars generate some serious code – all for good reason when using LINQ. But NOT for a good reason if you’re being lazy – which is the point of this whole post. If you find yourself using “var” anywhere that’s not within a LINQ statement, it’s probably not a good idea."

This response was quoting a post by Rob Conery. Let me first say, I have not read all of Rob’s post (mostly because the formatting is so bad that it makes the post hard to read, and I don’t have the time to spend on the effort). Maybe this quote is taken out of context, so take what I say next with a grain of salt. This quote is utter hogwash. The "var" keyword produces no extra code. Prove it to yourself.

public class Foo
{
}

class Program
{
    static void Main(string[] args)
    {
        Foo explicitFoo = new Foo();
        var inferredFoo = new Foo();
    }
}

The resultant IL that’s generated is this.

.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 1
    .locals init (
        [0] class Playground.Foo explicitFoo,
        [1] class Playground.Foo inferredFoo)
    L_0000: nop 
    L_0001: newobj instance void Playground.Foo::.ctor()
    L_0006: stloc.0 
    L_0007: newobj instance void Playground.Foo::.ctor()
    L_000c: stloc.1 
    L_000d: ret 
}

The code for the explicitly declared variable and the variable inferred through "var" is identical. Do NOT fear using "var" because of performance concerns, as there are none.

With that out of the way, where do I fall in opinion on this subject?  Well, reading the various posts in this meme, there seem to be two camps.  I think both are extremes.  The first extreme is the "Microsoft Camp".

“Overuse of var can make source code less readable for others. It is recommended to use var only when it is necessary, that is, when the variable will be used to store an anonymous type or a collection of anonymous types.”

I simply can’t agree with this extreme viewpoint.  Tell me how the following code can possibly be considered less readable for others?

var inferredFoo = new Foo();

The other camp, which I’ll call the Boodhoo camp (though I don’t have proof that Mr. Boodhoo specifically takes this extreme point of view), believes that you should always use "var".  I can’t agree with that extreme either.  Can anyone tell me what the type of the following declaration is?

var current = Foo.Current;

We can have arguments until we’re blue in the face about how better naming would have prevented this confusion.  I don’t buy the argument, though.  First, names aren’t always under your control.  Second, even with better naming, it’s still possible to find yourself in situations where you don’t have enough type information available.  C# is still a strongly typed language, and knowing the exact type you’re dealing with is important.  Relying on the IDE is a no-go for me, and relying on naming isn’t always possible.

So, what do I think?  Out of habit, I’m still not using "var" that frequently, but I see no harm in using it for your typical "new" statements like the first example.  I don’t know if I’ll get into the habit of doing that or not, but I see no reason to try and talk anyone out of doing so.  For other declarations like the second example, unless the type is anonymous, I’d probably favor the Microsoft guideline of not using "var" here.  You can probably get away with it 80% of the time and I won’t care, but that other 20% is enough reason for me to not recommend getting into this habit.

Edit:  From a reply to Ilya’s post by "Simon" we get a list of rules much closer to what I think makes sense.

  • Do use var for anonymous types
  • Do use var for initialization from constructors (var list = new List<string>();)
  • Do use var for casts (var list = (IList)source;)
  • Consider using var where naming implies the type of the variable (var xmlSerializer = GetXmlSerializer();)

The last bullet point is the most controversial, but I can agree with it as long as developers are consciously considering the choice.  For the rest, I can see no reason to recommend not using "var" in any of those situations.
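Putting Simon’s rules together in one compilable snippet (GetXmlSerializer here is a made-up helper, standing in for the kind of name-implies-type method the last rule describes):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Xml.Serialization;

class VarRules
{
    // Made-up helper whose name tells you the type even though "var" hides it.
    static XmlSerializer GetXmlSerializer()
    {
        return new XmlSerializer(typeof(string));
    }

    static void Main()
    {
        var anon = new { Name = "Widget", Id = 1 };  // anonymous type: var is required
        var names = new List<string>();              // constructor: the type is right there
        var list = (IList)new ArrayList();           // cast: the type is stated
        var xmlSerializer = GetXmlSerializer();      // naming implies the type

        names.Add(anon.Name);
        list.Add(anon.Id);
        Console.WriteLine(names[0]);                     // prints: Widget
        Console.WriteLine(list.Count);                   // prints: 1
        Console.WriteLine(xmlSerializer.GetType().Name); // prints: XmlSerializer
    }
}
```

In every one of these declarations the type is either required to be inferred (the anonymous type) or is already visible on the right-hand side, which is exactly why none of them hurt readability.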


I’ve been working for some time now on implementing a framework to enable coding WPF applications that follow the Model-View-ViewModel pattern.  I’ve found several "dark corners" of the WPF libraries where the designers have made choices that make it difficult to extend the library.  For instance, the constructors on the routed event arguments are not public, making it very difficult to reroute events back into the ViewModel.  It’s been frustrating, but I could always understand why the design was the way it was, even if I thought it was a mistake that should be rectified in some later release.
 
However, I just ran into one that just astounds me.  I’ve been trying to implement a scheme in which the ViewModel would be able to effect navigation with no knowledge of the View.  It’s similar in concept to Java’s Struts, where navigation information is stored in a configuration file and is initiated via keywords instead of actual knowledge of the view (if that made sense to anyone).  Anyway, I had something working fairly well, and reached the point where I had to start worrying about getting "return values" back to the ViewModel.  This is when I discovered something interesting.  When you follow the normal pattern for handling PageFunction code, like this:
 

class MyPage : Page
{
    // … other stuff removed

    void Navigate()
    {
        MyPageFunction f = new MyPageFunction();
        f.Return += MyPageFunction_Return;
        NavigationService.Navigate(f);
    }

    void MyPageFunction_Return(object sender, ReturnEventArgs<int> e)
    {
        // whatever
    }
}
 
(That was done by memory and likely has several signature and syntax issues… it’s for illustration only.)
 
Knowing how delegates work, you’d expect that the addition of MyPageFunction_Return would cause the MyPageFunction instance to maintain a reference to the MyPage instance, keeping it alive, and that’s what the method would be called on.  A little experimentation, though, proved this to be wrong.  The old MyPage instance is collected, a new one is created, and that’s the instance MyPageFunction_Return is called on.  I have no idea how it’s even possible to do that!  I started to do a Google search, and that’s when I discovered something interesting.  Did you know the documentation for the Return event indicates you must only handle the event from a method on the calling page?  If you try to handle it somewhere else, an exception will be raised!  Seems to me they implemented some hack to allow the event handler to be retargeted to the new Page instance when you return (this is the default behavior, unless you assign KeepAlive="True").  This hack either required or made it easier for them to document that you must target the calling Page with any event handler.  Does that sound like a hack to you, or what?  Worse, it leads to many dead ends when using PageFunction.  My scenario tries to handle the return event in the ViewModel, but the situation I found using Google was trying to encapsulate all of the functionality in a custom control, which seems like a very common thing to want to do.
 
I love WPF.  It makes a large part of the development effort simple, while allowing you to still do amazing things with your UI.  However, there’s enough of these dark corners to frustrate the heck out of me.

