Friday, February 10, 2017

How to get Open Live Writer working with Blogger

Wow – I didn’t blog at all in 2016!  While I’m certain that falls squarely on me, perhaps a small part of it is related to a bug connecting to Blogger (formerly Blogspot) blogs in my favorite blogging software – Open Live Writer.  This blog post explains the workaround and how I figured it out.  Hopefully this issue will get patched by the Open Live Writer team and the workaround won’t be needed in the future.

The Issue

Trying to connect to a Google Blogger blog with Open Live Writer v0.6.0.0 doesn’t work (this is the latest version as of this post).  Clicking the Sign In button brings the user to this URL:
Notice the client_id value “PASTE_YOUR_CLIENT_ID_HERE” in that URL!  That’s definitely not right, and it breaks the attempt to authenticate to Google via OAuth.

The Workaround

I tried installing several different versions of Open Live Writer via Chocolatey.  I found that the issue is also present in versions v0.5.1.4 and v0.5.1.3, but version v0.5.1.2 works.  The best way to get Open Live Writer working with Blogger is to install Chocolatey and then run this command:

choco install openlivewriter -version 0.5.1.2 -force


That will install the specific version of Open Live Writer that we need in order to authenticate. However, you can’t run it from the main shortcut. Open Live Writer uses Squirrel, a tool that automatically updates the software when it is launched. Almost all the time this is great, but it won’t help here, since we want to run the exact version we’ve just installed.


So run the versioned executable directly – with a Squirrel install it should be at a path like the one below – and you should be able to authenticate with Google’s Blogger service via OAuth:

%LocalAppData%\OpenLiveWriter\app-0.5.1.2\OpenLiveWriter.exe

Once you’ve authenticated to Blogger, you can quit out and then use the Start Menu shortcut. This will run the Squirrel updater, and your system will be updated to the latest version which is v0.6.0.0 as of this writing.


How I Figured This Out


It was a bit of brute force. First I cloned the repository for Open Live Writer from here. Then, I opened up the solution in Visual Studio 2015 Community Edition and searched for the string “PASTE_YOUR_CLIENT_ID_HERE” in all files. This led me to the src/managed/OpenLiveWriter.BlogClient/Clients/GoogleBloggerv3Secrets.json file. When I noticed that the file was called “Secrets” it made me think that perhaps this was a compilation/deployment issue. I looked into the .gitignore file at the root of the repo, and sure enough the GoogleBloggerv3Secrets.json file was specifically ignored.


Therefore, my theory was that the person who packaged up Open Live Writer version v0.6.0.0 didn’t have the Secrets file updated on their machine. This suggests that someone in the past probably DID have it updated, so I tried successively earlier versions until I hit on v0.5.1.2 working (I would have tried a bisect approach, but there were only four historical releases).


Why was this broken?


It’s good that the Open Live Writer team moved the Google Blogger secrets out of their public repo. However, this makes it difficult for individuals to build a complete version of the app. A good solution could be to use a CI/CD build solution – I wonder if the Microsoft team might look into Visual Studio Team Services? I suspect they could get a good deal from that vendor.


I wrote up the bug in Issue #561.

There is still an issue with using images in Open Live Writer with a Blogger account due to the Picasa shutdown.  I will post again if I find a workaround for that issue.

Monday, December 21, 2015

Fixing Truncation Issues with T-SQL Flex

If you’ve worked with SQL Server for any length of time, you’ve run into this error:

Msg 8152, Level 16, State 14, Procedure MyProcedure, Line 162
String or binary data would be truncated.

While the problem is simple enough - you’re trying to put a big value in a small field - it’s a maddening error because SQL Server doesn’t give you a hint as to which field or value is causing the issue.

It’s not even so bad when you’re copying data from a well-known source (like many fields from a single table), but sometimes you’re copying data from an arbitrary query.  Having to look up the data sizes from across your database (or worse – figuring out the data types of calculated fields or functions/views/etc.) can be terribly time-consuming.

I wrote a tool called T-SQL Flex that can help.  It’s a plugin for SQL Server Management Studio that generates scripts for any arbitrary SQL query.  It has many uses, but one I like is helping to fix “String or binary data would be truncated” problems.


This post assumes:

  1. You’re working in a dev environment and have good + tested backups.
  2. You understand the risks of “following instructions on some guy’s blog”.
  3. You have T-SQL Flex installed in your SSMS.
  4. You have a query that is giving “String or binary data would be truncated” errors.

General process for fixing a truncation issue with T-SQL Flex:

  1. Find the statement in your query that is causing the error.  The easiest way is to comment out everything but the first statement (Highlight, then CTRL+K, then CTRL+C).  Then run and see if it bombs.  If not, uncomment the next statement (Highlight, then CTRL+K, then CTRL+U) and repeat.  Eventually you should get the error.  Please note that you will have to be smart about things like branching logic (IF/ELSE), changes being made to permanent tables, and any sort of transaction work (BEGIN TRAN/COMMIT/ROLLBACK etc.) – that’s all out of scope for this blog post. If your procedure is really big, you may wish to search by halves.
  2. Once you’ve found the offending statement, do two things:
    • Add a new SELECT TOP 0 statement to query the schema of the destination table.  If your INSERT statement had field names, use them in the SELECT statement.  If not, it should be SELECT TOP 0 * FROM (whatever the destination is).  This should work for regular tables as well as #temp tables and @table variables.
    • Comment out the INSERT INTO part of the offending statement, and leave the SELECT part there.

    For example, if the offending statement that you found in step 1 was:

    INSERT INTO #MyTempTable (field1, field2, field3)
    SELECT productId, price, [description]
    FROM #MyOtherTempTable
    WHERE productId = @productId;
    --rest of query is still commented out.

    You should convert it to this:

    SELECT TOP 0 field1, field2, field3 FROM #MyTempTable;
    --INSERT INTO #MyTempTable (field1, field2, field3)
    SELECT productId, price, [description]
    FROM #MyOtherTempTable
    WHERE productId = @productId;
    --rest of query is still commented out.

  3. Now you should be ready to run your statement in T-SQL Flex.  With the above modifications, the output from T-SQL Flex after clicking Run ‘n’ Rollback should be something like this:

    CREATE TABLE #Result1 (
      field1 INT NOT NULL,
      field2 DECIMAL(18,5) NOT NULL,
      field3 NVARCHAR(100) NOT NULL
    );

    CREATE TABLE #Result2 (
      productId INT NOT NULL,
      price DECIMAL(18,5) NOT NULL,
      [description] NVARCHAR(200) NOT NULL
    );


    Notice how in the above results, field3 in the first table (our destination) is an NVARCHAR(100) column, but the corresponding field in the second table (our data source) is NVARCHAR(200).  In this scenario, that is likely our problem – a value that fits in an NVARCHAR(200) field may not fit in an NVARCHAR(100).  You still have to compare the type of each column in the source to each column in the destination, but T-SQL Flex does all the legwork for you.  If there are a lot of fields, it shouldn’t be too bad to paste the temp table definitions side-by-side in Excel.

  4. Once you’ve identified the mismatched fields, you should either fix the capacity of the too-small fields in the destination, or else explicitly truncate the source data (such as specifying LEFT([description],100)).  Put the statement back to how it was, and now it should no longer raise the truncation error.
  5. Once that statement is fixed, you may wish to try the whole query again to see if the main issue is resolved (uncomment the rest of the query), or else continue “stepping through” by going back to step 1.
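If there are a lot of fields, the side-by-side comparison from step 3 can even be scripted.  Here's a hypothetical sketch in TypeScript (names and structure are mine, not part of T-SQL Flex) of flagging columns where the source capacity exceeds the destination's:

```typescript
// Hypothetical helper: given the column definitions scripted for the
// destination (#Result1) and source (#Result2), report the columns
// where the source's declared capacity exceeds the destination's.
interface ScriptedColumn {
  name: string;
  type: string;    // e.g. "NVARCHAR"
  length: number;  // declared capacity, e.g. 100
}

function findTruncationRisks(
  dest: ScriptedColumn[],
  source: ScriptedColumn[]
): string[] {
  const risks: string[] = [];
  for (let i = 0; i < Math.min(dest.length, source.length); i++) {
    if (source[i].length > dest[i].length) {
      risks.push(dest[i].name);
    }
  }
  return risks;
}

// Columns from the example output above:
const destination: ScriptedColumn[] = [
  { name: "field1", type: "INT", length: 4 },
  { name: "field2", type: "DECIMAL", length: 9 },
  { name: "field3", type: "NVARCHAR", length: 100 },
];
const source: ScriptedColumn[] = [
  { name: "productId", type: "INT", length: 4 },
  { name: "price", type: "DECIMAL", length: 9 },
  { name: "description", type: "NVARCHAR", length: 200 },
];

console.log(findTruncationRisks(destination, source)); // [ 'field3' ]
```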

Good luck!

Tuesday, June 9, 2015

Talking TypeScript on the .NET Rocks! Podcast

I appeared on the .NET Rocks podcast show #1149 this week.  I had a blast talking about TypeScript, the new ES6 features coming in TypeScript 1.5, and grunt-ts, our TypeScript compiler task for Grunt.

I wanted to add a few things.

Firstly, here are a number of additional links that are relevant to our discussion:

My Pluralsight Course: Practical TypeScript Migration
Great review of what it's been like working with TypeScript over 24 months:
Destructuring in ES6:
Get started with grunt-ts if you've never used Grunt:
Node Inspector - debug Node.js code in the Chrome Dev Tools - works great with TypeScript also:
Download TypeScript 1.5 beta for Visual Studio 2013 (transpiles much of ES6):
Download TypeScript 1.4 for Visual Studio 2013 (current official release with minimal ES6 transpilation support):

Around the 10 minute mark, I said that you should put type annotations on function calls.  That would be awful – I meant to say you should put type annotations on function declarations.

Around the 20 minute mark, I said that you can use ES6 features today with TypeScript 1.5 because it offers some transpilation features.  Richard then said transpilation was interesting because (paraphrasing) "when browsers come out with finished ES6 implementations, your page should go faster", and I sort of disagreed.  The reason is that transpiling actually changes your code from using one pattern to code using a different pattern.  Richard's statement was correct with regards to polyfills which is something that a dynamic language like JavaScript makes possible.  Developers who create polyfills attempt to implement future features in legacy environments.  For example, in ES6, there is a new `hypot` method on the JavaScript Math object, so Math.hypot(3,4) === 5.  A polyfill could be created for that feature to work in ES5 fairly easily.  A well-written polyfill would check to see if there was already a hypot property on the Math object; if so, it would just fall back to the native implementation which would (theoretically) be faster (though for something simple like a hypotenuse calculation it'd likely be no different).
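To make the polyfill idea concrete, here's a minimal sketch of the pattern (mine, not production code): check for the native Math.hypot first, and only fall back to a hand-rolled version when it's missing:

```typescript
// Hand-rolled fallback: hypot(a, b, ...) = sqrt(a^2 + b^2 + ...).
function hypotFallback(...values: number[]): number {
  let sumOfSquares = 0;
  for (const v of values) {
    sumOfSquares += v * v;
  }
  return Math.sqrt(sumOfSquares);
}

// A well-written polyfill defers to the native implementation when
// one exists, since it is (theoretically) faster.
const hypot: (...values: number[]) => number =
  typeof Math.hypot === "function" ? Math.hypot : hypotFallback;

console.log(hypot(3, 4)); // 5
```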

Features that there's no way to polyfill require transpilation.  For example: class MyClass { }.  There is no (usable) class keyword in ES5, and so there is no way to create something in ES5 at runtime that would allow that code to run and do something reasonable; the syntax just doesn't exist.  That's why the transpilers have to change that code into a function declaration.  Since it becomes a function declaration, there's no way to natively optimize it as a class (beyond general optimizations for constructor function patterns that the engines would already have).  So for transpiled features, if you want to let the engine act natively, you'd have to dynamically serve the right code to the right engines - or (more practically) just serve the lowest common denominator like ES5 to all, which generally works great in today's browsers.
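For illustration, here's roughly what that down-leveling looks like (a simplified sketch of my own; real compiler output differs in the details):

```typescript
// ES6/TypeScript source: no ES5 engine can parse the class keyword.
class Greeter {
  constructor(public name: string) {}
  greet(): string {
    return "Hello, " + this.name;
  }
}

// Roughly what a transpiler emits for ES5: a constructor function
// with the method attached to the prototype.
const GreeterES5: any = (function () {
  function Greeter(this: any, name: string) {
    this.name = name;
  }
  Greeter.prototype.greet = function (this: any): string {
    return "Hello, " + this.name;
  };
  return Greeter;
})();

console.log(new Greeter("world").greet());    // Hello, world
console.log(new GreeterES5("world").greet()); // Hello, world
```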

// Here's a cool destructuring example:
// This is valid TypeScript (in 1.5) and JavaScript (ES6).
// TypeScript will compile it down to work correctly with ES5
//  by creating a "normal" parameter in the signature of doSomething
//  and adding code in the body to pull out its baz property.

var myObject = {
  foo: 1,
  bar: false,
  baz: "hello"
};

function doSomething({ baz }) {
  console.log(baz);
}

doSomething(myObject); // hello

I said something silly around the 36 minute mark: "convert the inside of a tight loop first"; that's a very bad idea and doesn't even really make sense. :-(  What I should have said is "convert something central to your application first - like a common utilities library or a configuration file".  The reason for converting "from the inside out" as I recommend is that each new thing you convert will infer many types from the things that have already been typed.  As you grow the scope of TypeScript in your application, you will begin to find that you need to annotate less and less (other than function signatures) - and the things you do need to annotate may even be errors.  I feel that annotating function signatures is something that ultimately helps you write better code since it leads to compiler-enforced documentation, and it's usually necessary if you've activated --noImplicitAny.
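Here's a tiny sketch (with made-up names) of why converting from the inside out pays off: once a shared utility's signature is annotated, TypeScript infers types at every call site for free:

```typescript
// A central utility with an annotated signature.
function formatPrice(amount: number, currency: string): string {
  return currency + amount.toFixed(2);
}

// Calling code needs no annotations: 'total' is inferred as string,
// and passing a string for 'amount' would now be a compile error.
const total = formatPrice(19.5, "$");
console.log(total); // $19.50
```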

Around the 51 minute mark, I was talking about how TypeScript doesn't help people enough with regards to componentizing projects of medium complexity and I misspoke a few times by using the word Solution (meaning a Visual Studio Solution) when I meant to say Project (meaning a Visual Studio Project).  TypeScript works great within a single Visual Studio project - the trouble comes when you want to use TypeScript across projects (like in a large/complex solution).  There are many hacky workarounds to this, but I still don't know of a great "this is the obviously right way to do it" story; I hope that the TypeScript team and the community will eventually put together a great solution for this.

I had a great time appearing on the podcast – thanks for the opportunity!

Wednesday, January 14, 2015

Excel 2013 Conditional Formatting for Columns

If you have two columns in an Excel sheet, and you want to conditionally format fields in the second column when the values are not equal, do this:
Our starting point – we want Grape to be highlighted because it is not the same as Cherry.
Select all of column B by clicking the header.  On the Home tab, click Conditional Formatting… New Rule.
Choose “Format only cells that contain”, and “Cell Value” “not equal to” “=A1”.  Note that putting the “=” is very important because otherwise Excel will think you’re comparing it to the string literal “A1”.
Then click “Format…” and select the appropriate formatting (such as setting a fill color).
Then click OK and OK.
Note: If you don’t want the header to be highlighted, you can apply the conditional formatting to only the cells with actual data, but you will have to adjust the corresponding comparison row in the formula.  In our example, we might apply the conditional formatting to cells $B$2:$B$5 (which starts at row 2), but we’d have to set the formula to “not equal to” “=A2” (also starts at row 2).  If you want to select down to the end of the sheet, your selection formula would be something like =$B$2:$B$1048576 on the Excel 2007+ “big grid”.

Thursday, January 8, 2015

Enabling Portable Git in Node.js command prompt on Windows

GitHub for Windows doesn't put Git in the PATH by default.  If you'd like your Node.js command prompt to have the git command available by default, simply edit your nodevars.bat file.  By default, this is in C:\Program Files\nodejs\.  You will have to run your text editor in an administrative context for this to work.

Replace this line in your nodevars.bat file:
set PATH=%APPDATA%\npm;%~dp0;%PATH%

With these two lines:

for /F %%A in ('"dir /s /b /OD %userprofile%\appdata\local\github\portableGit_*"') do set gitPath=%%A\bin
set PATH=%APPDATA%\npm;%~dp0;%PATH%;%gitPath%

See this gist for an easier-to-copy version:

That will set a variable called %gitPath% with the location of git.exe, and then append it at the end of your path in the Node.js command prompt.  Because of the /OD switch, it will use the version of Portable Git whose folder has the latest modified date.  If Portable Git is not found, you will get a harmless extra ; in the PATH.
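The same "newest PortableGit_* folder wins" logic can be sketched in Node.js (a hypothetical illustration of what the batch one-liner does, not part of nodevars.bat):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical sketch: list the PortableGit_* folders in a directory
// and return the bin subfolder of the one with the latest modified
// date (the equivalent of dir's /OD sort order).
function latestPortableGitBin(githubDir: string): string | null {
  const candidates = fs
    .readdirSync(githubDir)
    .filter((name) => name.toLowerCase().startsWith("portablegit_"))
    .map((name) => path.join(githubDir, name))
    .sort((a, b) => fs.statSync(a).mtimeMs - fs.statSync(b).mtimeMs);
  if (candidates.length === 0) {
    return null; // like the batch version, finding no match is harmless
  }
  return path.join(candidates[candidates.length - 1], "bin");
}
```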

Tuesday, January 6, 2015

Using an alternate TypeScript compiler inside Visual Studio

TypeScript team member Daniel Rosenwasser provided instructions for replacing the TypeScript compiler and language service used by Visual Studio here:

He also described how to update the lib.d.ts file later in the same issue:

These instructions are not guaranteed to work, so be sure to back up your original files.

Monday, December 1, 2014

Gmail and a Dating Site are Conspiring to Break Up My Marriage

My wife and I are happily married.  Both of us were surprised the other day when she started getting personals results sent to her GMail account!  She was a bit freaked out that it was some creepy malicious person, but it seems that it was just some end-user carelessness (by someone we don’t know) combined with inconsistent email address parsing implementations between two big companies.  Here's what happened.

An "interesting" feature of consumer GMail is that it ignores dots in email addresses.  So mail sent to, say, jane.doe@gmail.com or janedoe@gmail.com (made-up examples) is delivered to the same GMail inbox.  (This behavior is documented in GMail's help.)  However, any messages that you send are always delivered as the "official" email address - whatever you actually typed in when you signed up for GMail.
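To illustrate, here's a hypothetical helper (my own sketch, with made-up addresses) showing how GMail effectively canonicalizes the part before the @ by stripping dots:

```typescript
// Hypothetical sketch of GMail's dot-insensitive matching: strip dots
// from the local part, so addresses differing only by dots collide.
function canonicalizeGmail(address: string): string {
  const atIndex = address.lastIndexOf("@");
  const local = address.slice(0, atIndex).split(".").join("");
  const domain = address.slice(atIndex + 1);
  return local.toLowerCase() + "@" + domain.toLowerCase();
}

// Two "different" addresses that land in the same inbox:
console.log(
  canonicalizeGmail("jane.doe@gmail.com") ===
    canonicalizeGmail("janedoe@gmail.com")
); // true
```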

It seems that someone else with an email address similar to my wife’s signed up for the dating site and entered an incorrect email address.  No big deal – I'm sure this happens all the time – but what's bad is that they used an address that is the same as my wife's but with no dots.  So it's the “same” for incoming mail, but “different” from my wife’s address when she sends mail.

We reported the problem, but sadly the site's robo-support has been unhelpful so far.  When she tries to explain what is going on, she gets back messages from support saying that she’s not the owner of the account (because the email address is different), so there’s nothing that can be done.  But of course she is the owner of the account based on the way GMail handles dots – it’s just not an exact string match.  I can't imagine this is the first time this has ever happened, and you'd think a company that big would account for an oddity like this from an email provider as large as GMail.

My wife and I met on that same dating site many years ago and we were very happy with their service at the time.  If a human from their support team spots this, please take a closer look at Incident:141126-001205.  (I guess this was the 1205th ticket from November 26…)  We’d love it if you could close out the account that was recently mistakenly registered.  I’m sure the other person has opened a different account already, and you could probably verify that by IP.

Email address as a unique identifier – what could possibly go wrong?

Let’s invent our own email standards – because reasons!