How to remap calc.exe to scriptcs.exe

Lately I've found myself launching ScriptCS more and more to do simple calculations. I half-jokingly said on Twitter that I'd better remap calc.exe to scriptcs.exe on my machine. However, it seems my joke tweet was taken seriously by some people, and I was asked how this was done. So here goes!

For this next trick I will use my favorite Windows trick – Image File Execution Options (IFEO). I've blogged about IFEO in the past; it's generally used to attach a debugger to a process before it starts, but it can also do other useful things, such as replacing the Windows Task Manager with Process Explorer, or even preventing certain processes from launching – handy for stopping Narrator in Windows 8 from popping up via the Win-Enter key.

So here is how to remap calc.exe to launch ScriptCS instead. First, locate where scriptcs.exe is installed on your machine, as we need the full path – you can run where scriptcs.exe in CMD to find it. If you don't have scriptcs.exe on your PATH, it's best to install it via Chocolatey (you're welcome!). If you'd rather script the change, see the sketch after the steps below.

  1. Open the registry editor (regedit.exe), and navigate to:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options
  2. Create a new key called calc.exe
  3. Inside the newly created key, create a new String Value called Debugger
  4. Double-click Debugger, and set c:\path\to\scriptcs.exe -repl as the value
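
Here's that scripted alternative – a minimal C# sketch of the same registry change. The scriptcs path is a placeholder (use whatever where scriptcs.exe reported), and it must run elevated:

using Microsoft.Win32;

class RemapCalc
{
    static void Main()
    {
        // Must run elevated: this writes under HKEY_LOCAL_MACHINE.
        // On 64-bit Windows, run as a 64-bit process so the value lands in the 64-bit registry view.
        const string ifeoCalc =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\calc.exe";

        using (var key = Registry.LocalMachine.CreateSubKey(ifeoCalc))
        {
            // Placeholder path: use the location that 'where scriptcs.exe' reported.
            key.SetValue("Debugger", @"c:\path\to\scriptcs.exe -repl");
        }
    }
}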

That's it – from now on, when you launch calc.exe, scriptcs.exe will open instead! To undo this, simply delete the calc.exe key from the registry path above.

P.S. If you have the Windows SDK installed, you can use the gflags.exe utility to do this instead:

  1. Launch gflags.exe (it needs to be launched elevated)
  2. Go to the Image File tab
  3. In the Image text box, write calc.exe and press TAB (I know)
  4. Down at the bottom, under Debugger, write c:\path\to\scriptcs.exe -repl and press OK

Bonus: now replace devenv.exe with scriptcs.exe!

Happy hacking!

Roslyn! It’s got Electrolytes!

Disclaimer: these are some random musings (read: rant) on the subjects of Roslyn, ReSharper and the future of development tools. While I’m heavily biased towards one of the R’s, I’ll try to keep an open mind. Or not.

Not 10 minutes had passed since the Roslyn open-sourcing announcement at Build 2014 – by itself a monumental, historic event for Microsoft and the .NET community worldwide – before some people started predicting the demise of ReSharper. A week or so later, JetBrains finally posted a Q&A explaining their reasons for not moving to Roslyn (as predicted), at least not in the foreseeable future. Which, in turn, caused yet another sea of comments, both supporting and condemning the decision.

Now, why do I care?

I've been a developer of Visual Studio tooling for a while now. Anyone who has ever attempted to build a Visual Studio extension knows very well the limitations of certain APIs. In particular, the Visual Studio automation model (called DTE) and the code model – both very old, at times unpredictable APIs. The only sources of information, other than MSDN and the associated forums, are a handful of people who have years of painful experience developing on top of those APIs. Asking a Visual Studio extensibility question on StackOverflow will almost always yield an answer from the same handful of people. You get to know their names, their avatars and the Microsoft divisions they work in (or worked in at some point).

Needless to say, developing Visual Studio extensions is hard. It's made harder by the fact that each Visual Studio version is different, with different features and its own little quirks. Maintaining the same, consistent experience throughout all Visual Studio versions requires painstakingly maintaining different versions of the same code base, using #if directives, partial classes, linked source files, duplicated files… the list goes on. It's a "dark art", known only to a few dozen people, and it's been this way for a long time. Until now.

Enter Roslyn.

When Roslyn was publicly announced back in 2011, it promised to change that. Initially described as "compiler-as-a-service", Roslyn became the promise of a better way for developers to interact with what is traditionally a "black box" – the vast process that happens behind the scenes in Visual Studio. Developers now have a chance to use a modern, clean API to do things previously considered impossible on the .NET platform – from turning C# into a scripting language and compiling C# in Swedish, to creating "quick-fixes" and refactorings easily. It's so easy, in fact, that Dustin Campbell did it live on stage at Build in about 15 minutes – something previously unheard of!

The possibilities now open to .NET developers are endless, but so are the realities, which is the point (if any) of this post. First, let's peek into the Roslyn box. Roslyn, officially known as the .NET Compiler Platform, is composed of several parts, mainly the compiler model (syntax trees and the semantic model, code parsers and generators) and a set of diagnostic APIs for building analysis tools, refactorings and quick-fixes. These APIs are open source, and are available to everyone.
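
Just to give a taste of the compiler model, here's a tiny sketch – assuming the Microsoft.CodeAnalysis.CSharp NuGet package – that parses a snippet of C# into a syntax tree and lists its method declarations, no Visual Studio involved:

using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynTaste
{
    static void Main()
    {
        // Parse a snippet of C# source into a syntax tree.
        var tree = CSharpSyntaxTree.ParseText("class Foo { void Bar() { } }");

        // Query the tree for method declarations and print their names.
        var methods = tree.GetRoot()
                          .DescendantNodes()
                          .OfType<MethodDeclarationSyntax>();

        foreach (var method in methods)
            Console.WriteLine(method.Identifier.Text); // prints "Bar"
    }
}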

However, there's a second part of the Roslyn story: the Visual Studio integration and features, currently in Preview. Those are not currently open source, and it's unknown whether they will be opened at a later stage. They include the base types and internal implementations of the refactorings and quick-fixes that ship with the Preview, as well as numerous services that are tied directly to core components such as the debugger, the code editor and other Visual Studio internals. You can read a great overview of what's in the Roslyn box in this post by Schabse Laks (another name in the "dark art" crowd).

So what does this have to do with ReSharper? Well, simply put, for almost a decade ReSharper has provided Visual Studio users with features that were only available in other IDEs, on other platforms. Over the years, as new Visual Studio versions got some of ReSharper's features built in, ReSharper relentlessly continued to innovate with better static analyses and code fixes, while making sure all those innovations were supported in older versions of Visual Studio. The latest version of ReSharper (v8.2 at the time of writing) supports almost all of the same features from Visual Studio 2005 through 2013.

As someone who understands the intricacies of supporting different versions and architectures of Visual Studio, I am dumbfounded by the amount of engineering effort that went into this. Years of abstracting, refining and stabilizing layers make Visual Studio disappear completely from ReSharper's point of view, allowing plugins for ReSharper to "just work" on all supported Visual Studio versions. And if you need to find out how something works? Decompile it! They want you to do it, and even give you a tool for this.

So should ReSharper switch to Roslyn? JetBrains don't think so. I don't think so either – why would they? ReSharper's already got a George – the entire ecosystem is there, has been there for a decade, constantly improved. You don't just replace it. As developers, we have a tendency to jump on the new shiny, but honestly, how often do you switch out your database? Architecturally, Roslyn and ReSharper differ enough to make this change infeasible, not to mention dangerous – it would require an almost complete rewrite of ReSharper, and as Joel once said, that is something you should never do.

In conclusion, if imitation is the sincerest form of flattery, I'd say the guys over at JetBrains are plenty adulated. The internet is abuzz with talk of Roslyn and ReSharper, which must also be good for SEO. However, ReSharper isn't going anywhere – JetBrains will continue to do what they do best. And Roslyn? Roslyn will allow developers who were previously afraid of "black boxes" to develop great tools for their fellow developers. Out of thin air, I predict that within 2 years there will be an explosion of productivity tooling in the Visual Studio galaxy, and we're witnessing its birth. But time will tell.

How Nancy made .NET Web development fun!

Let’s get this out of the way first: this is not a post about how to use Nancy – there are lots of blog posts out there, written by people far better at it than me!

I am not a web developer. During my career I have mostly worked on desktop applications (with the occasional dab in database land – typical CRUD stuff). Over the years my passion for development shifted towards building development tools (Visual Studio and ReSharper plugins), and at my day job I'm now having a blast building OzCode, a debugging productivity tool for Visual Studio.

So I never had the chance to "do" web development. Every time I tried, I gave up quickly, because I could never get the hang of it! I don't know JavaScript, and ASP.NET (even MVC, even Web API) makes a lot of assumptions about how to structure and build web apps. The same goes for other web frameworks and languages – they're all great(!), just not for me. Anything I ever tried to build, I quickly abandoned (due to getting stuck, or otherwise losing interest).

Enter Nancy.

When I first heard about Nancy, I was intrigued – an entire web application that fits in a tweet! If, in the highly unlikely event, you're reading about Nancy for the first time in this blog post, here's the canonical "hello world" app:

public class SampleModule : Nancy.NancyModule
{
    public SampleModule()
    {
        Get["/"] = _ => "Hello World!";
    }
}

And with ScriptCS being the hot new thing, hosting Nancy apps does not even require Visual Studio!
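
Self-hosting in general takes very little ceremony either – here's a minimal console sketch (not the ScriptCS route, just an illustration), assuming the Nancy.Hosting.Self NuGet package and an arbitrary local port:

using System;
using Nancy.Hosting.Self;

class Program
{
    static void Main()
    {
        // Nancy discovers modules (like SampleModule above) automatically.
        using (var host = new NancyHost(new Uri("http://localhost:8888")))
        {
            host.Start();
            Console.WriteLine("Nancy is listening on http://localhost:8888 - press Enter to stop.");
            Console.ReadLine();
        }
    }
}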

So what makes Nancy so appealing to me, a non-web developer? First and foremost – the people behind it! Nancy is a perfect example of the Open Source spirit – with more than 160 contributors, making it a fun, well-documented, well-tested, accessible framework for everyone and everything!

But what I like most about Nancy has nothing to do with its actual application – it has to do with the way it works under the covers. As demonstrated by one of Nancy's lead developers, Andreas Håkansson (@TheCodeJunkie on Twitter), in the Guerilla Framework Design talk at DevDay, Nancy uses lots of cool C# language hacks to achieve simplicity and make using the framework as easy as possible!

Want examples? Let's start with the "hello world" app above: there are at least 3 cool things going on that don't require you to understand exactly how they work. For example, the funky = _ => "smiley face". While aesthetically pleasing, this is a lambda expression defining the body of the handler for HTTP GET requests to the route "/". The lambda receives a parameter which is a dynamic dictionary (represented by the underscore in the example above, since it's unused). This dictionary contains, among other things, the URL parameters, so they can be used immediately inside the method body, e.g.:

Get["/greet/{name}"] = parameters => "Hello " + parameters.name;

Here, the underscore has been replaced with the named parameter parameters. Now, when someone issues an HTTP GET to the address /greet/Igal, for example, the value "Igal" is captured under name in the dynamic dictionary, and is just there, available to use!

Another thing to note is that the lambda returns a string, yet it somehow renders fine in the browser. The Get indexer (along with the other HTTP verbs that Nancy provides) accepts a Func<dynamic, dynamic>, so the lambda can return pretty much anything. The conversion is taken care of under the hood by Nancy, which defines a lot of implicit conversions to convert strings into valid response content and integers into valid HTTP status codes! This means that you can simply "return 404;" from the body of a Nancy route, and it will auto-magically be transformed into the HttpStatusCode.NotFound that Nancy understands.
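
To put both of those together, here's a hypothetical little module (the routes are made up for illustration) showing a string becoming response content and an int becoming a status code:

public class ConversionsModule : Nancy.NancyModule
{
    public ConversionsModule()
    {
        // A string return value is implicitly converted into valid response content.
        Get["/hello"] = _ => "Hello from Nancy!";

        // An int return value is implicitly converted into an HTTP status code (404 Not Found).
        Get["/missing"] = _ => 404;
    }
}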

Everything in Nancy is expandable, configurable and overridable. Nancy is built with testability, extensibility and pluggability in mind, which means that almost everything you need it to do (support another View engine, hosting platform, custom IoC container, etc.) is just a NuGet package away. Which is just awesome, as there are tons of things you can add to Nancy.

Finally, no post of mine can be complete without mentioning tooling. I love productivity add-ins, and I believe there should be a tool for everything! This is why I created a plugin for ReSharper, bringing some of ReSharper's goodness to Nancy! It adds support for navigating to and creating Views, code completion and other validations. Everything that was until now available only for ASP.NET MVC (with ReSharper's help) is now also available for Nancy!

In conclusion, if you’re dabbling in web development, you should definitely give Nancy a try! It’s the only framework that made someone like me, a complete web development noob, be productive and actually create something useful!

Happy hacking!

Installing a .vsix Extension via MSI without ‘devenv /setup’

There are two ways to install Visual Studio extensions: via VSIX – a .zip file with a .vsix extension – installed from the Visual Studio Gallery or by double-clicking it, which executes VSIXInstaller.exe; or "manually", by installing the files from a custom installer, typically an MSI. The latter approach is generally used if the extension needs to perform additional tasks, such as running ngen or registering COM servers.

TL;DR: it is possible to register custom Visual Studio extensions by "touching" (updating the timestamp of) the file extensions.configurationchanged, which is located in the

%VSInstallDir%\Common7\IDE\Extensions

directory (in Visual Studio 2012 and above). This will cause Visual Studio to reload all the packages – in essence, this is the equivalent of installing the package via VSIX from the Visual Studio Gallery. No devenv /setup required!

The official Visual Studio guidelines suggest that when using the MSI approach, the installer is required to run devenv.exe /setup after installation to refresh the Visual Studio package caches and settings, so that the custom package gets registered. This approach, however, has several drawbacks compared to the VSIX installation.

The first and foremost is speed. Running devenv.exe /setup rebuilds the entire Visual Studio configuration from scratch, which takes a significant amount of time, depending on the machine. This can be improved by running the command with the /nosetupvstemplates switch, if the package you're installing does not install any templates.

The second issue is stability – running /setup on the user's machine may sometimes cause other extensions to register incorrectly, or to fail to load altogether. For a commercial product this is obviously a problem, since it usually results in a support ticket, followed by an uninstall of your software (whether to attempt to repair the situation before reinstalling, or worse, to remove it and never try your software again).

Failing to find an adequate solution, I asked on StackOverflow about the possibility of migrating to a proper VSIX installation, with several limitations. In the end, after extensive research of the problem, I discovered a solution which solved my immediate issue: the VSIX installer, after unpacking the files into the target directory (it's a .zip file, after all), touches a file located in the root of the Extensions directory, called extensions.configurationchanged. The next time Visual Studio starts, if it detects a date change in this file, it will re-register all the .pkgdef files it finds. This significantly reduces both the amount of time Visual Studio takes to load (by not rebuilding the entire configuration cache) and the risk of breaking the environment.
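
For reference, here's a minimal sketch of performing the same "touch" from managed code. The Visual Studio path below is just an example – resolve it for the version actually installed – and it needs write access under Program Files (i.e. elevation):

using System;
using System.IO;

class TouchConfigurationChanged
{
    static void Main()
    {
        // Example path for Visual Studio 2013; resolve %VSInstallDir% for the installed version.
        var file = @"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\extensions.configurationchanged";

        // "Touch" the file: bump its timestamp so Visual Studio re-registers .pkgdef files on the next start.
        File.SetLastWriteTimeUtc(file, DateTime.UtcNow);
    }
}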

This is an unofficial, unsupported scenario. But, this is what VSIX installer does, and I see no reason not to do this myself. Beware, and use at your own risk!

Dragging and dropping files and folders into .vsix

Visual Studio extensions, or VSIX files, are simple ZIP archives following the Open Packaging Conventions, and have a .vsix extension. Double-clicking a .vsix will install it into Visual Studio, by opening it with VSIXInstaller.exe.

TL;DR: If you want to be able to drag files and folders into a .vsix, there's a registry tweak you can apply – add the Windows Compressed (zipped) Folders drop handler's GUID to the .vsix entry under HKCR\.vsix. Create the subkeys shellex\DropHandler:

HKEY_CLASSES_ROOT\.vsix\shellex\DropHandler

Set the value of (Default) inside DropHandler to {ED9D80B9-D157-457B-9192-0E7280313BF0}, restart explorer.exe, and voilà! You can now drag files or folders into .vsix files, as if they were named .zip.

// output:verbose

I wanted to drag some files into a VSIX package, but it didn't work – Windows has no idea that a .vsix is actually a .zip file.

Dragging files and folders onto other files or folders is handled by Shell Drop Handlers (and writing one in .NET is made incredibly simple by SharpShell, by Dave Kerr). Instead of writing one for .vsix, I wanted to make the default one – the one that handles zip files (known as Compressed Folders in Windows) – treat .vsix files as .zip archives. For this, I needed to assign zip's Drop Handler to .vsix. Since Drop Handlers are Shell Extensions based on COM, it means they have a GUID. And to list all Shell Extension GUIDs, we can use a nice little utility by NirSoft called ShellExView.

Upon running ShellExView, we get a listing of all the Shell Extensions installed on the system. We know Windows has one for zipped files; pressing Ctrl-F and searching for 'zip' takes us to the Compressed (zipped) Folder DropHandler. We need the GUID of this handler, so double-clicking the entry opens its Properties pane, where we can copy the GUID value.

The last step is adding this GUID as a valid Drop Handler for .vsix files. The steps to do this are described in the TL;DR above.
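
If you'd rather apply the tweak from code than through regedit, here's a minimal C# sketch of the same registry change (it needs to run elevated, since HKEY_CLASSES_ROOT is backed by machine-wide registry data):

using Microsoft.Win32;

class VsixDropHandler
{
    static void Main()
    {
        // Create HKEY_CLASSES_ROOT\.vsix\shellex\DropHandler (requires elevation).
        using (var key = Registry.ClassesRoot.CreateSubKey(@".vsix\shellex\DropHandler"))
        {
            // The GUID of the Windows Compressed (zipped) Folders drop handler, set as the (Default) value.
            key.SetValue("", "{ED9D80B9-D157-457B-9192-0E7280313BF0}");
        }
    }
}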

After restarting explorer.exe (or rebooting), we can now drag and drop files onto .vsix files!

Happy hacking!

[FW]orking in GitHub – a slightly better workflow?

If you don’t understand the title, I tried to be clever and used regular expressions. I now have 2 problems.

I've been using git and GitHub for a while now, but only recently found out it's possible to define 2 separate URLs for fetching and pushing. Oftentimes, when I wanted to contribute to an open source project, I had to go through the ceremony of a) forking the repository, b) cloning the fork to my machine and c) defining an upstream remote to keep the fork in sync with changes from the original repository.

Defining the upstream repository is my least favorite part of working with git/GitHub – I always have to look up how to do it. Instead of keeping the fork in sync, I just want to be able to fetch changes from the original (upstream) repository, but push my changes to my own fork.

Turns out, git supports this scenario! Normally, when you type git remote -v, you get the following output:

> git remote -v

origin git@github.com:username/MyForkedRepository.git (fetch) 
origin git@github.com:username/MyForkedRepository.git (push)

As you can see, the remote 'origin' defines 2 URLs, one labeled fetch and one labeled push. Let's set a different URL for the original, upstream repository (the one to fetch from):

git remote set-url origin git@github.com:NancyFx/Nancy.git

And another URL to my own fork (the one to push to):

git remote set-url --push origin git@github.com:hmemcpy/Nancy.git
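
Running git remote -v again now shows the upstream repository for fetch and my fork for push:

> git remote -v

origin git@github.com:NancyFx/Nancy.git (fetch)
origin git@github.com:hmemcpy/Nancy.git (push)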

And that’s it, I can now fetch and push normally, without having to worry about which remote I’m using!

N.B.

You can also do this by directly modifying the .git/config file, and adding a separate pushurl value under [remote "origin"]:

...
[remote "origin"]
 fetch = +refs/heads/*:refs/remotes/origin/*
 url = git@github.com:NancyFx/Nancy.git
 pushurl = git@github.com:hmemcpy/Nancy.git
...

Alternatively, if you're using GitExtensions on Windows, you can go to Repository > Remote repositories in the menu, and check the Separate Push URL checkbox.

How to change two-finger scroll direction in Synaptics Touchpad

For some reason, the default setting for two-finger scrolling is reversed: in order to scroll down, you need to move your two fingers up on the touchpad. Fortunately it's easy to fix, but the setting is not trivial to find. Here's how:

  1. Go to Synaptics Pointing Device options (either right-click the icon in the notification area, or right-click the desktop, select Personalize, then Change mouse pointers)
  2. Go to Device Settings tab, and make sure the “Synaptics LuxPad V8.1” is selected
  3. Press Settings, and in the new dialog that opens, select Two-Finger Scrolling, then click the cogwheel button
  4. Uncheck the Enable reverse scrolling direction checkbox, and close all the dialogs

That’s it!
