Tuesday, July 8, 2014

How to Generate a Fiddler SAZ File

Fiddler is a free program that logs HTTP and HTTPS requests on Windows.  These instructions demonstrate how to use the full version of Fiddler to create an archive file (.SAZ) of those requests for troubleshooting purposes.  As an alternative, a simplified version of Fiddler that only supports capturing is available here: http://fiddlercap.com.

Warning: Distributing a SAZ file is potentially a massive security risk, especially if HTTPS decryption was enabled during the capture, because any secrets sent over the wire (including passwords and session cookies) will be included in the file.  You can have Fiddler AES-encrypt the SAZ file with a password via the save dialog, but you must still treat the exported SAZ file (and its password) with extreme care!

  1. If the web site that you need to profile uses HTTPS, consider setting up a dedicated VM for Fiddler.  This makes trusting the Fiddler certificate (essentially a self-inflicted man-in-the-middle attack) less problematic.  There are instructions on how to do that here if you’re going to be testing with old IE versions.
  2. Go to http://fiddler2.com and download Fiddler.  You can use either the .NET 2 or .NET 4 version, depending on which version of the framework you have installed.  Install it, accepting the defaults, and run it.
  3. Skip this step if the site that you need to profile is HTTP-only.  If the site uses HTTPS, open Tools… Fiddler Options… HTTPS tab and enable “Decrypt HTTPS traffic”.  When the SCARY TEXT AHEAD dialog comes up, click “Yes” to trust the Fiddler Root certificate, and then click “Yes” again to acknowledge that you are allowing man-in-the-middle attacks on this computer.  This is how Fiddler can decrypt your HTTPS traffic (good), and it is also how a malicious actor with a copy of the Fiddler root certificate and its private key could (bad).  More information is available here: http://docs.telerik.com/fiddler/configure-fiddler/tasks/DecryptHTTPS

    Screenshot of Fiddler Options dialog
  4. If you want to use a browser other than Internet Explorer to generate the SAZ file, you may need to follow additional instructions here: http://docs.telerik.com/fiddler/configure-fiddler/tasks/ConfigureBrowsers 
  5. Minimize Fiddler and browse to the site you want to test.  Switch back to Fiddler and you should see the HTTP and HTTPS calls that were made while the capture was running.  For example, here is a visit to the www.google.com homepage.  To see the session details, select the Inspectors tab and click “Raw” on both the top and bottom panels – this shows the exact request text that the browser sent to the server and the server’s exact response.

    Screenshot of Fiddler showing the left and right hand sides, plus the top and bottom request and response panels.
    In this screenshot, you can see the list of sessions that resulted from accessing www.google.com.  When I clicked on a session, I could see its details in the right-hand pane after selecting Inspectors and then Raw for both the top and bottom sections.
  6. To export the SAZ file, select the interesting sessions in the left-hand pane and click File… Save… Selected Sessions… in ArchiveZip…  Select “Password Protected SAZ” in the “Save as type” field and save the SAZ file in a secure location.  It’s best to exclude things like login pages or other forms with secrets if they’re not relevant to the issue you’re troubleshooting, because the exported SAZ file will contain any secrets that were transmitted (including HTTPS traffic if HTTPS decryption is enabled).  Choose a strong password.  You will need to share this password with anyone who should be able to open the file.
  7. Securely provide the SAZ file and password to whoever needs it.  Again: this file contains EVERYTHING that was sent from the computer via HTTP (and potentially HTTPS) during the capture, potentially including passwords, session cookies, and other secrets that anyone with the SAZ password can read in clear text.  It is critically important to handle this file and the password securely!  (Can you tell that this is important?)
  8. When the other party has the SAZ file on their computer, they can open it by opening Fiddler and clicking File… Load Archive…
  9. If you enabled HTTPS traffic decryption, your computer is still at risk until you perform the following two steps:
    1. Disable HTTPS decryption by opening Tools… Fiddler Options… HTTPS tab, and unchecking Decrypt HTTPS traffic.
    2. Also, click the “Remove Interception Certificates” button to delete the Fiddler certificates.

      See: http://stackoverflow.com/questions/16827901/how-do-you-remove-the-root-ca-certificate-that-fiddler-installs 
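
If you want to double-check that the Fiddler certificates are really gone after that last step, here is a minimal PowerShell sketch.  It assumes Fiddler used its usual “DO_NOT_TRUST_FiddlerRoot” issuer name; if the command returns nothing, the stores are clean.

# List any remaining Fiddler interception certificates in the current user's
# Personal and Trusted Root stores.  No output means none were found.
Get-ChildItem Cert:\CurrentUser\My, Cert:\CurrentUser\Root |
    Where-Object { $_.Issuer -like '*DO_NOT_TRUST_FiddlerRoot*' } |
    Select-Object Subject, Thumbprint, PSParentPath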

Thursday, May 22, 2014

Getting Started with the Modern.ie VMs on VirtualBox with IIS Express

Microsoft has made pre-packaged virtual machine images available for testing different versions of IE via the modern.ie web site.  This blog post describes how to load these VM images into the free VirtualBox software and get them talking to an IIS Express web site running on your local PC.

Warning: These instructions were tested on VirtualBox 4.3.12, the latest version as of May 22, 2014.  By default, the VM gets Internet access through your PC using a virtual NAT, and it is assigned an IP address by VirtualBox’s built-in DHCP server.  Unless you start changing things, this should all remain on your local PC and your VM won’t accidentally get an IP address from your network’s real DHCP server – but your mileage may vary and you should confirm that!  The author of these instructions makes no warranty that these settings won’t cause a problem on your particular network, so proceed with caution and at your own risk.  Do not follow these instructions if you are not comfortable with the concepts of IP addresses, network cards, virtualization, and web servers – or with the havoc a rogue DHCP server could cause.

  1. Download and install VirtualBox: http://www.virtualbox.org (yes, I know it’s Oracle, but it is gratis and it works even on some older Core 2 processors that Hyper-V doesn’t support)
    • Be sure that VirtualBox is fully installed before starting the next step, because installing its drivers will temporarily disable your physical NIC.
  2. Open your Network control panel and ensure that the “VirtualBox Host-Only Network” adapter is set to use the static IP 192.168.56.1 and the Subnet Mask is 255.255.255.0.  You can leave the DNS server blank.
  3. Download the appropriate VM image from the modern.ie web site
    • Open this page: http://modern.ie/en-us/virtualization-tools#downloads
    • Select "VirtualBox on Windows".
    • Download part1.exe and all of the .rar files for the particular VM you want.  Save these to a temporary "downloads" folder.
    • When all of the files are downloaded, run the EXE and allow it to extract the .OVA file.  Once you have the .OVA file, you can then delete the corresponding .EXE and .RAR files.
    • Move the .OVA file to a “special place”.  This is the file you have to run to set up a new VM of that variety.
    • Run the .OVA file by double-clicking on it.  You will get an "import" screen in VirtualBox.
      • Change the "Virtual Disk Image" path to point to the appropriate place where you want the virtual hard drive to live for the VM you’re creating.
      • Check the box to reinitialize the MAC addresses.
      • Click “Import”
        Screenshot of the VirtualBox import settings dialog
  4. When the import is complete, go into the settings for the VM.  On the Network tab, confirm that network Adapter 1 is attached to NAT.  Click OK.
  5. Boot the VM by selecting it and clicking the Start button in the button bar.
  6. Note: on all of the modern.ie VMs, the user name is IEUser and the password is Passw0rd! (the exclamation point is part of the password).
  7. When the VM boots up, open a command line in the VM and run ipconfig /all
    • The first VM you launch should have an IPv4 address like 10.0.2.15 which was served by the virtual DHCP server on 10.0.2.2.
  8. The next step assumes that you already have a working site in IIS Express.  If not, use Visual Studio or some other tool to create one.
  9. In your “Documents” folder on the host PC, open the IISExpress\config\applicationhost.config file.
    • Find the web site that you want the VM to use and add an extra entry to its bindings tag so that IIS Express also listens on the internal VirtualBox IP address (in my example, the site was set up on port 59707 – you can use whatever port you wish), and save the config file (a scripted version of this edit is sketched after this list):
      Screenshot of the extra binding entry in applicationhost.config
  10. Start the site.
  11. Try to access the site by visiting http://192.168.56.1:59707 (or whatever port number that you used) – it should hopefully work as expected.
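
If you would rather script the applicationhost.config edit from step 9 than make it by hand, here is a minimal PowerShell sketch.  The site name "MyWebSite" is a hypothetical placeholder, the port matches the 59707 example above, and the path assumes the default IIS Express config location under your Documents folder.  Back up the file and close IIS Express before running it.

# Append a second binding for the VirtualBox Host-Only address (192.168.56.1)
# to the chosen site in IIS Express's applicationhost.config.
$configPath = "$env:USERPROFILE\Documents\IISExpress\config\applicationhost.config"
[xml]$config = Get-Content $configPath
$site = $config.configuration.'system.applicationHost'.sites.site |
    Where-Object { $_.name -eq 'MyWebSite' }
$binding = $config.CreateElement('binding')
$binding.SetAttribute('protocol', 'http')
$binding.SetAttribute('bindingInformation', '192.168.56.1:59707:')
[void]$site.SelectSingleNode('bindings').AppendChild($binding)
$config.Save($configPath)

If IIS Express refuses to listen on the new address, you may need to run it as administrator or reserve the URL first (for example with netsh http add urlacl).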

Have fun getting your new site working with old IE!

Wednesday, April 16, 2014

From Zero to Pull Request with Git on Windows

I wrote a guide about using Git and GitHub for "Windows people".

This is the guide I wish had been available when I got started with Git last year.

The guide takes you through everything you need to know to get started with Git and GitHub, and requires only a Windows PC on which you have admin rights.  You can probably get through the whole thing in an evening or two.

This guide is unique because:

  • It is specifically for developers who use Windows
  • It only describes how to contribute to an existing project
  • It prioritizes the GitHub for Windows UI when the command-line is not required
  • It is not embarrassed to give step-by-step directions or definitions of basic terms
  • It prioritizes "the simple way for the common circumstance" over "every possible way for every possible circumstance"

Check out the guide here:

https://github.com/nycdotnet/FromZeroToPullRequestWindows/blob/master/FromZeroToPullRequestWindows.md

If you like it, please hit me up on Twitter and let me know: @nycdotnet

Tuesday, April 15, 2014

Rounding versus truncating

Be careful when implicitly converting data types in T-SQL.  Directly assigning 1.5 (either as a FLOAT or a NUMERIC) to an INT variable in SQL Server truncates the value to 1 rather than rounding it up as you might expect.  Explicitly calling ROUND(value, 0) as part of the assignment rounds 1.5 up to 2.

Here's an example:



DECLARE @Value INT;
DECLARE @NumericValue NUMERIC(10,1) = 1.5;
DECLARE @FloatValue FLOAT(53) = 1.5;

SET @Value = @NumericValue;  --Not calling ROUND()
SELECT @Value;  --Returns 1

SET @Value = @FloatValue;  --Not calling ROUND()
SELECT @Value;  --Returns 1

SET @Value = ROUND(@NumericValue, 0);
SELECT @Value;  --Returns 2

SET @Value = ROUND(@FloatValue, 0);
SELECT @Value;  --Returns 2



Tuesday, July 23, 2013

Query to Find "Update" Dependencies on a Table or View (SQL Server 2012)


The built-in dependency finder in SQL Server Management Studio does not provide the ability to distinguish between dependencies that are read-only and dependencies that are read-write.  However, SQL Server 2012 does make this information available via the management views.

 

The query below will return all of the objects that reference the table (or view) identified by @SchemaName and @ObjectName where the referencing object can modify the data in that table (or view); specifically, this means an INSERT, UPDATE, or DELETE operation.

 

Note that dependency tracking in SQL Server is not perfect because of the possibility of using dynamic SQL, but this will hopefully be useful information.


 
DECLARE @SchemaName NVARCHAR(128) = 'dbo';
DECLARE @ObjectName NVARCHAR(128) = 'MyTableName';
 
SELECT re.referencing_schema_name, re.referencing_entity_name,
  ref.*
FROM sys.dm_sql_referencing_entities(@SchemaName + '.' + @ObjectName,'OBJECT') re
CROSS APPLY sys.dm_sql_referenced_entities(referencing_schema_name + '.' +
    referencing_entity_name,'OBJECT') ref
WHERE ref.is_updated = 1
  AND ref.referenced_entity_name = @ObjectName
  AND (ref.referenced_schema_name IS NULL OR ref.referenced_schema_name = @SchemaName);

 

This post was edited to fix the code formatting and to specify SQL Server 2012, as this sadly does not appear to be supported in SQL Server 2008 or earlier.

 

References

MSDN: sys.dm_sql_referenced_entities (Transact-SQL)

MSDN: sys.dm_sql_referencing_entities (Transact-SQL)

Wednesday, June 5, 2013

How To Make Your DLL Into A NuGet Package for a Private NuGet Feed

This is my reinterpretation of the documentation on NuGet.org, modified to remove the public publishing steps, and to add local publishing steps and a source control reminder.  This guide is intended to help enable the benefits of NuGet for proprietary code such as business-specific DLLs that are not fit for public distribution.  There are plenty of details, options, features, and gotchas with NuGet that are not covered here, but this should hopefully be enough to help you get started.

Step 1 is to set up a private NuGet feed.  You only need to do this once for your organization.

  • You can find out how to do that using the official documentation here: Creating a private NuGet feed
  • You should create a file share, on which you have at least "Change" rights, that points to the NuGet packages folder on the server where you configure the feed.

Step 2 is to write a cool or useful library (such as a DLL) in Visual Studio.

  • Be sure to fill in the project properties under Assembly Information, including the version number, if you haven't already.

Step 3 is to get the NuGet.exe file, which is not included by default even if you have the NuGet Package Manager working in Visual Studio.

  • Download the NuGet.exe bootstrapper here: http://nuget.codeplex.com/releases/view/58939.  This is a "bootstrapper only", which means that the first time you run it, it will download the real NuGet.exe program and overwrite itself.  Run it once from a command line (just NuGet.exe) and you'll have the latest version (it should be 600 KB or more).

Step 4 is to set up NuGet in your solution.

  • Create a folder under your solution called "NuGet" and copy NuGet.exe there.
  • Open the Package Manager Console window inside Visual Studio (under View... Other Windows... if you don't see it).
  • At the PM> prompt, type dir and hit Enter.  You should see the files and folders in your solution as well as the new NuGet folder that you just created.
  • At the PM> prompt, type .\NuGet\NuGet.exe and hit Enter.  You should see the help for NuGet.exe printed in the console.  If so, PowerShell is working and NuGet.exe is in the correct place in the subfolder under your solution.
  • At the PM> prompt, type .\NuGet\nuget.exe spec MyProjectFolderName\MyProjectName.csproj and hit Enter.  This assumes that there is a C# project called MyProjectName in the MyProjectFolderName subfolder of the current directory.  If so, NuGet will create a .nuspec file in the project subfolder.  If you click the Show All Files button in Visual Studio, you should see it in the Solution Explorer.
  • Right-click the file and choose "Include in Project".  With the file selected in the Solution Explorer, hit F4 and ensure the Build Action is "None" and the Copy to Output Directory is set to "Do not copy".
  • Open the file and update all fields that you feel are relevant.  In particular, you should change the package ID field to the name that you want the package to show as in the NuGet Package Explorer, and get rid of any backslashes in the name.  You can remove the children of the dependencies element if the package stands alone.
  • Check-in your changes to Source Control!  This should include the .nuspec file, and ideally the path and binary to NuGet.exe (if your organization permits keeping binaries in Source Control).

Step 5 is to package up your project.

  • At the PM> prompt, type .\NuGet\nuget.exe pack MyProjectFolderName\MyProjectName.csproj and hit Enter.  This will run NuGet to package up the built project into a .nupkg file.  This is the command that you will run every time you wish to update your NuGet package for publication.
  • You can confirm that this file was successfully created by unzipping it and checking to see if your DLL is inside.
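
If you want to script that check, here is a minimal sketch (the package file name below is a hypothetical example; the ZipFile class requires .NET 4.5):

# A .nupkg file is just a ZIP archive, so extract it and list what is inside.
Add-Type -AssemblyName System.IO.Compression.FileSystem
[IO.Compression.ZipFile]::ExtractToDirectory("$PWD\MyProjectName.1.0.0.nupkg", "$PWD\nupkg-contents")
Get-ChildItem "$PWD\nupkg-contents" -Recurse | Select-Object -ExpandProperty FullName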

Step 6 is to copy your .nupkg file to the shared folder that you set up in step 1.

  • You can do this with PowerShell in the Package Manager Console via COPY *.nupkg \\MyNuGetServerName\MyNuGetPathName$

Step 7 is to begin using your freshly created NuGet package in your team's other projects!

Friday, May 24, 2013

Performing an INSERT from a PowerShell script

This code demonstrates how to do an INSERT into SQL Server from a PowerShell script using an ADO.NET command object with strongly-typed parameters.  The script issues a DIR *.* command and inserts the results into the database.  While this example is not particularly useful in itself, it is hopefully simple and understandable enough to explain the concepts.

In general, parameterized SQL queries (like the below) perform better and are less vulnerable to SQL injection attacks than SQL queries created using string concatenation*.

The script code displayed in this blog post performs several INSERT statements using a row-by-row method.  As a bonus, I have uploaded a version of this script to SkyDrive that uses two flavors of Table-Valued Parameters, which should give you better performance for big datasets.  (A rough sketch of the TVP idea also appears after the row-by-row script below.)

 

Download Scripts:

SkyDrive link to Row by Row only Script (has less commentary than this blog post, but code is identical)

SkyDrive link to Row by Row and Table Valued Parameter Script

Note: You have to edit the above files in a text editor to set the SQL Server name and DB name.  Once you edit the file it is no longer considered "remote" so you can run the file as long as the PowerShell ExecutionPolicy is set to RemoteSigned.

 

To try this script, first create a table in a junk database on a test SQL server:

CREATE TABLE FilesInFolder (
       ID INT IDENTITY(1,1) NOT NULL,
       TheFileName NVARCHAR(260) NOT NULL,
       FileLength BIGINT NOT NULL,
       LastModified DATETIME2 NOT NULL,
       PRIMARY KEY (ID)
);

 

Then paste the following code into a .PS1 file (you have to edit the $DBServer and $DBName values first) and run it using PowerShell:

 

function Do-FilesInsertRowByRow ([Data.SqlClient.SqlConnection] $OpenSQLConnection) {

    $sqlCommand = New-Object System.Data.SqlClient.SqlCommand
    # Use the connection that was passed in as the parameter.
    $sqlCommand.Connection = $OpenSQLConnection

    # This SQL query will insert 1 row based on the parameters, and then will return the ID
    # field of the row that was inserted.
    $sqlCommand.CommandText = "SET NOCOUNT ON; " +
        "INSERT INTO dbo.FilesInFolder (TheFileName,FileLength,LastModified) " +
        "VALUES (@TheFileName,@FileLength,@LastModified); " +
        "SELECT SCOPE_IDENTITY() as [InsertedID]; "

    # I am adding the parameters without values outside the loop.  This means that inside the
    #  loop all I have to do is assign the values and say "run" - much less work than setting
    #  up everything from scratch in each iteration.
    # Also notice the doubled-up (()) - this is how you create a new object and then
    #  immediately pass it as a function parameter in PowerShell.
    # Next, the class names in square brackets are .NET types.  PowerShell assumes System.,
    #  so you can omit it.  The double-colon is how you reference enumeration members.
    # Finally, I am piping the output of these calls to Out-Null since otherwise PowerShell
    #  would output the properties of the command object to the console on each call since
    #  there is no assignment made to a variable or another pipeline destination.
    $sqlCommand.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@TheFileName",[Data.SqlDbType]::NVarChar, 260))) | Out-Null
    $sqlCommand.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@FileLength",[Data.SqlDbType]::BigInt))) | Out-Null
    $sqlCommand.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@LastModified",[Data.SqlDbType]::DateTime2))) | Out-Null

    # I love how I can foreach over a call like "dir *.*" in PowerShell!!
    foreach ($file in dir *.*) {
        # Here we set the values of the pre-existing parameters based on the $file iterator
        $sqlCommand.Parameters[0].Value = $file.FullName
        $sqlCommand.Parameters[1].Value = $file.Length
        $sqlCommand.Parameters[2].Value = $file.LastWriteTime

        # Run the query and get the inserted row's ID back into $InsertedID
        $InsertedID = $sqlCommand.ExecuteScalar()
        # Write to the console.
        "Inserted row ID $InsertedID for file " + $file.Name
    }
}


# Open the SQL connection (you have to change these variables)
$DBServer = "MySQLServerName\MyInstanceName"
$DBName = "MyJunkDBName"
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection
$sqlConnection.ConnectionString = "Server=$DBServer;Database=$DBName;Integrated Security=True;"
$sqlConnection.Open()

# Quit if the SQL connection didn't open properly.
if ($sqlConnection.State -ne [Data.ConnectionState]::Open) {
    "Connection to DB is not open."
    Exit
}

# Call the function that does the inserts.
Do-FilesInsertRowByRow ($sqlConnection)

# Close the connection.
if ($sqlConnection.State -eq [Data.ConnectionState]::Open) {
    $sqlConnection.Close()
}
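
For comparison, here is a rough sketch of one way to do the same inserts with a single table-valued parameter instead of row-by-row calls.  This is not the SkyDrive script; the dbo.FilesInFolderType type name is a hypothetical example, and it assumes $sqlConnection is the open connection from the script above.

# The TVP approach needs a user-defined table type in the database first, e.g.:
#   CREATE TYPE dbo.FilesInFolderType AS TABLE (
#       TheFileName NVARCHAR(260) NOT NULL,
#       FileLength BIGINT NOT NULL,
#       LastModified DATETIME2 NOT NULL);

# Build a DataTable whose columns match the table type.
$table = New-Object System.Data.DataTable
[void]$table.Columns.Add("TheFileName", [string])
[void]$table.Columns.Add("FileLength", [long])
[void]$table.Columns.Add("LastModified", [datetime])
foreach ($file in dir *.*) {
    [void]$table.Rows.Add($file.FullName, $file.Length, $file.LastWriteTime)
}

# One INSERT...SELECT sends every row to the server in a single round trip.
$tvpCommand = New-Object System.Data.SqlClient.SqlCommand
$tvpCommand.Connection = $sqlConnection
$tvpCommand.CommandText = "INSERT INTO dbo.FilesInFolder (TheFileName,FileLength,LastModified) " +
    "SELECT TheFileName, FileLength, LastModified FROM @Files;"
$tvpParam = $tvpCommand.Parameters.Add("@Files", [Data.SqlDbType]::Structured)
$tvpParam.TypeName = "dbo.FilesInFolderType"
$tvpParam.Value = $table
[void]$tvpCommand.ExecuteNonQuery()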

 

 

 

*By "in general", I mean that this is true most of the time, and in the few times when it is not true, the use of parameters is unlikely to leave you worse off than not using parameters.  Plainly, if you are using string concatenation to write SQL statements in any language, you are doing it very wrong and potentially leaving your database open for attack and exploitation.

 

I'm a member of the SkyDrive Insiders program; you can learn more about this program here.  I published a quick-start guide to using SkyDrive at work here.  If you have any questions about SkyDrive, please ask me!