As a developer I need a wide variety of software tools. While I normally install these individually, it can be useful to install multiple utilities at once, especially when I’m rebuilding my development environment. For this, https://www.ninite.com/ can be extremely useful.
ninite.com lets you choose one or more tools to be installed, such as WinRAR, PuTTY, Visual Studio Code, FileZilla FTP, Java Runtime, etc. You can then download a single install package that installs them all in one go.
I work a lot in the airport Common Use (CUTE / CUPPS / CUSS) world. As such, one of the applications I work with frequently is ARINC’s PCP / PCPNET (now Rockwell Collins). For the past several years I’ve had this service running in my local dev environment to support my own product development built on top of ARINC’s platform (along with several other CUTE / CUPPS providers) and it has “just worked”. As recently as last night I was doing development in my home office and started my local ARINC PCPNET environment with no problems.
This afternoon I tried to fire up my ARINC PCPNET and I noticed the service took a really long time to start and I couldn’t connect to my ATB / BTP printers. Upon investigating the error logs in C:\Logs\Pcpnet\PcpNetCom.Log I noticed a new error:
Problem occurred while listening on port 50005 ... An attempt was made to access a socket in a way forbidden by its access permissions
“Well that’s new”, I thought to myself. My first instinct was that somehow some of the ports that I use for PCPNET were being used by another application. So the first thing I did was check for ports in use:
netstat -aon | find "50005"
But that showed that the port was not being used.
Ok. So what did I change this morning that might have affected things? Well, this morning I installed Docker for Windows. At first that didn’t seem like it could cause any problems, but a bit of digging around led me to the following command:
netsh int ipv4 show excludedportrange protocol=tcp
Which gave me a listing like the following:
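The exact ranges vary from machine to machine, but the output looks something like this (the values here are illustrative):

Protocol tcp Port Exclusion Ranges

Start Port    End Port
----------    --------
      1664        1763
     49752       49851
     49852       49951
     49952       50051
     50052       50151

* - Administered port exclusions.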
That’s interesting. A series of port ranges have been reserved, and my required ports (ARINC defaults to 50001 and up, typically a maximum of 6 – 10 ports for peripherals) fall right in the middle of them. And better yet, after each reboot the port ranges above 49000 all seem to “shift” up or down by 20 or so.
To be fair, this is not really a Docker issue. My investigation pointed back to Hyper-V, which Docker for Windows requires you to install first. During my investigation I uninstalled Docker, then removed Hyper-V, and the port reservations above 49000 all went away. I then re-enabled Hyper-V on my Win10 Pro machine and the 49000+ port reservations all came back, even before I reinstalled Docker.
So how do we solve this? In theory we should be able to delete a port reservation range with something like the following:
netsh int ipv4 delete excludedportrange protocol=tcp startport=50000 numberofports=50
and then add our own port reservation range with something like the following:
netsh int ipv4 add excludedportrange protocol=tcp startport=50000 numberofports=50
Unfortunately this didn’t work; the delete command returned an error indicating the change couldn’t be made.
In the end, we have two possible solutions (there may be others, but this is what I found in my limited time to debug):
1. Change the port that ARINC PCPNET is using. This can be done in the registry by changing the “Client IP Port” entry for each device to a value outside the reserved ranges. Restarting the PCP32 or PCPNET services will then pick up the new ports.
2. Uninstall Docker and Hyper-V (via the Windows Features tool), reboot, reserve our range using the add command listed above, then re-install Hyper-V and Docker. After doing this, Hyper-V is smart enough to place its own reserved ranges “around” our pre-reserved range; notice in the output below that the range from 50000 – 50049 is protected:
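You can verify the reservation took hold by re-running the show command; administered exclusions are flagged with an asterisk (output illustrative):

netsh int ipv4 show excludedportrange protocol=tcp

Protocol tcp Port Exclusion Ranges

Start Port    End Port
----------    --------
     49846       49945
     50000       50049     *
     50096       50195

* - Administered port exclusions.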
PCPNet can now happily start with its preferred port range.
Installing MSSQL 2017 Developer Edition on my relatively clean development VM today, I got an error partway through the installation:
Error installing Microsoft Visual C++ 2015 Redistributable
VS Shell installation has failed with exit code 1638.
The installer then continued, but at the end I was told that the Database Engine and a couple of other components were not installed correctly.
I tried downloading a newer MSSQL installer, but that still didn’t work.
I tried installing the Microsoft Visual C++ 2013 Redistributable (per several suggestions from others). That made no difference.
In the end it boiled down to the fact that I already had Visual Studio 2017 installed, which had installed the Microsoft Visual C++ 2017 Redistributable. I ended up having to remove the Microsoft Visual C++ 2017 Redistributable using Windows Add/Remove Programs (being sure to uninstall both the x86 and x64 versions).
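If you want to see exactly which Visual C++ redistributables are installed, one quick way from a command prompt is a wmic product query (product enumeration is slow, but fine as a one-off; the name filter assumes Microsoft’s usual display names):

wmic product where "name like 'Microsoft Visual C++ 2017%'" get name, version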
After a reboot for good measure, the MSSQL 2017 installer ran fine.
I then went back and re-installed the Microsoft Visual C++ 2017 Redistributable x86 and x64 components from Microsoft’s download links.
I recently received this error when trying to commit a local Git repository in SourceTree:
*** Please tell me who you are.
Run
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
to set your account's default identity.
Omit --global to set the identity only in this repository.
fatal: unable to auto-detect email address (got 'username@MACHINENAME.(none)')
This was weird as I had already used the indicated commands to set my commit options. Running “git config --global -l” in a Git Bash shell (I am running on Windows 10) resulted in the following:
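The listing looked something like this (values illustrative); note the literal quotation marks stored in user.email:

user.name=Your Name
user.email="you@example.com"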
After a bit of searching online, I discovered one possible problem: my email address. Notice the quotation marks around the email address in the listing above. When I configured my email address I had used something like the following (reconstructed; in Git Bash, escaped quotes become part of the stored value):
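git config --global user.email \"you@example.com\"

To fix it, set the value again without the embedded quotes:

git config --global user.email you@example.com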
NUnit has been my favorite unit test framework for years. I find it has the features that work best for me.
I recently upgraded to NUnit 3.2. The latest version of NUnit can be downloaded from the NUnit site.
NUnit has always had several methods to run the tests you write. Probably the most popular way to run the tests, at least as a developer in the middle of working on a particular feature, has been the GUI runner. It provides a quick visual mechanism of seeing which of your tests have failed and the reasons why. After installing NUnit 3.2 I went looking for the GUI Runner but couldn’t find it. After a bit more digging I found a post by the developers indicating that they had split the GUI runner development from the framework development and that the GUI was still months away from completion.
In short, the “official” way to run unit tests now in NUnit 3 is to use various test runners. Until the GUI Runner makes a comeback, we’re going to configure Visual Studio’s test runner to run our tests. It’s pretty easy.
Open Visual Studio (I’m not going to go into how to use NUnit in this post. NUnit has some really great and easy to follow documentation.) No need to open or create a project. Click on the Tools menu, then Extensions and Updates. Select “Online” then in the search bar type “NUnit Test Adapter”. Make sure you pick the NUnit3 Test Adapter. Click on it then install. You will likely be asked to restart Visual Studio.
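If you don’t have a test project handy yet, a minimal NUnit fixture is enough to see the adapter in action (the namespace and class names here are just examples):

using NUnit.Framework;

namespace MyProduct.Tests
{
    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_TwoPlusThree_ReturnsFive()
        {
            // The constraint model (Assert.That) is the preferred NUnit 3 style.
            Assert.That(2 + 3, Is.EqualTo(5));
        }
    }
}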
Now when you run your test project you can open the Test Explorer (Test Menu, Windows, Test Explorer). If you haven’t built your test project lately do so now. Once you’ve built your test project you will see all of your tests listed in the Test Explorer. With the “Group By” drop down in the upper left corner of the Test Explorer window you can change how your tests are grouped. Setting it to “Class” will group your tests similarly to the default grouping in the NUnit GUI Runner, grouping them by the class the tests are contained in. You can also right-click in the Test Explorer window to change the grouping option.
Even with grouping set, the layout isn’t quite the same as the NUnit GUI Runner’s; there are ways to get closer in the Visual Studio Test Explorer, but I won’t cover that here. Without grouping it can be a bit difficult to run a subset of your tests, which is useful if you are working on a particular feature and just want to run the tests applicable to that feature. A nice feature of the Visual Studio test integration that partially solves this problem is the ability to run tests by scope. You can right-click within a test function, within a class, or outside of a class, and the context menu will offer “Run Tests”. Clicking this will run all tests in the clicked scope: the single test function, all tests in the selected class, or all tests in the namespace, respectively.
Another nice feature of the test runner is the visual cues on your tests. There are three versions of the icons, representing an un-run test, a successful test and a failed test. Clicking on any of the icons lets you run the test directly and even debug the code without starting the whole project.
UPDATE: There has been some progress on the GUI Runner, though you have to download the code and run it yourself if you want to use it. It is on the NUnit GitHub site.
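WCF’s built-in tracing is enabled by adding a System.ServiceModel trace source and an XmlWriterTraceListener to your service’s configuration file. A typical setup looks something like this (the listener name and log path are just examples):

<system.diagnostics>
  <sources>
    <!-- The System.ServiceModel source is where WCF emits its trace events -->
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <!-- initializeData is the path of the .svclog file to write -->
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\Logs\MyService\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>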
You can use a relative path in initializeData, or leave the path out completely, in which case the log will be written to the current directory. Where that is will depend on how you’re hosting your service, so it is often easier to just specify a full path.
Now when you run your application you will see a file created. If you double-click on the file, or otherwise open it in SvcTraceViewer.exe, you will get lots of good information about your service.
If you are getting an exception, the trace should have some entries highlighted in red. Examining those entries more closely will provide details on the underlying exception; in my case it turned out to be a serialization error.
WCF tracing is built on top of System.Diagnostics, and WCF is not the only system that provides trace information. Trace logging is highly configurable: you can set different log levels for different types of information, write to different targets, and consume the traces in different ways as well.
This MSDN article provides a good starting point for customizing your trace logging.
The most common setting you will likely adjust, though, is the trace level (the switchValue in the configuration above; it accepts the standard System.Diagnostics source levels such as Off, Critical, Error, Warning, Information, Verbose, and ActivityTracing). This lets you make the log extremely verbose when trying to track down a difficult bug or performance issue, or set a higher threshold so the log doesn’t grow so large but you still get critical errors.
From the looks of things, attackers were able to get some non-password-related data, such as account email addresses, password reminders and salt values. They are saying that the encrypted vault data (where actual encrypted passwords are kept) was not taken. While this is certainly not “nothing”, it doesn’t seem to be terribly bad either.
They have a list of suggestions in the notice that are really just a good idea to do from time to time in any case:
Change your master password. They will be asking everyone to do this (unless you have two-factor authentication enabled).
Enable two-factor authentication. I’ve had this turned on now for a few months using the free Google Authenticator app. It’s a little bit of a pain when you’re in a hurry, but really it’s a very easy solution and it significantly increases the security of your data. If you’re really security conscious, try using the YubiKey hardware token!
Change the password on any site where you might have re-used your master password. Reusing it is a bad idea anyway, so go do it now (and don’t reuse your new master password).
I’ve seen a lot of posts about how stupid it is to store all your password data in a centralized location. But really, I couldn’t disagree more. LastPass (and several other password management services) have been audited and investigated, and have even had portions of their code released as open source for review, and no one, including some very big names in security and encryption, has found any fundamental problems with them.
In addition, storing passwords is what these guys do. It makes more sense to rely on experts to do this for you than to roll your own solution. They have the expertise to do it right (even when there is a breach, their layered defenses make it virtually a non-issue), they have the tools to detect breaches quickly and hence rapidly mitigate the damage, and they have the reputation and professionalism to let their customers know that something happened and what they are doing to fix it. Trying to roll your own solution is like trying to write your own database engine because you can do it so much better than all those “other guys” out there. You are deluding yourself.
Along these lines, Bruce Schneier has some good suggestions on choosing your next secure master password.
Monday mornings are rough. It’s always hard to drag yourself into work after a few days away. But at least the sun was shining and the birds were singing.
Unfortunately my idealistic Monday morning was rudely interrupted by Visual Studio crashing on startup with a lovely error dialog.
“Ok, no problem”, I thought to myself. “I’ll just restart Visual Studio.” A few hiccups once in a while is not unusual. However, after my 4th restart attempt, including a reboot in the middle, I was still getting the error and getting worried. Was I looking at a long day of repairing/reinstalling Visual Studio?
I figured I’d do a quick search online, though I couldn’t imagine I would find anything useful under “Visual Studio Crash Startup”. I was wrong.
I came across several mentions of GitExtensions causing problems. Specifically, the GitExtensions Toolbar within Visual Studio. Apparently it REALLY doesn’t like being hidden. Fortunately I had already had my coffee this Monday morning and the gears were turning in my head (had this happened just one hour earlier we might have had a very different outcome). I remembered that on Friday I had done exactly this: I had hidden the GitExtensions Toolbar in Visual Studio. I love Git and use it extensively for my personal and professional side projects, but the VM I am running on is used only for my day job and we don’t use Git there, so I had figured I would clean up my environment a bit. Little did I know the tripwire I had just hit. There are a few ways to fix it:
1. Start Visual Studio in safe mode and unhide the toolbar. You can do this by launching Visual Studio from the command line (e.g. from a Developer Command Prompt, where DevEnv.exe is on the path) with the appropriate argument:
DevEnv.exe /safemode
2. Use Control Panel – Programs and Features to change your Git Extensions installation and remove the Visual Studio plugin (this is what I did).
3. Update to the latest version of Git Extensions, in which this issue has been resolved.
I recently updated the StructureMap NuGet package in one of my projects to the latest version (3.1.5.154). When I did this I was surprised when my code stopped compiling. The code in question was code that specified which constructor to call on one of my dependencies. Here is the old code that used to work just fine:
x.SelectConstructor<SessionGeneratorDefault>(() => new SessionGeneratorDefault((SecurityValues)null, false));
x.For<ISessionGenerator>().Use<SessionGeneratorDefault>()
.Ctor<SecurityValues>("secValues").Is(Utilities._buildSecurityValues())
.Ctor<bool>("forceLocalMode").Is(AppState._instance.ForceLocalMode);
After I updated the NuGet package I started getting a compile error:
'StructureMap.ConfigurationExpression' does not contain a definition for 'SelectConstructor' and no extension method 'SelectConstructor' accepting a first argument of type 'StructureMap.ConfigurationExpression' could be found (are you missing a using directive or an assembly reference?)
If you go and look at the StructureMap GitHub page for constructor selection, it shows that the format has changed:
x.ForConcreteType<SessionGeneratorDefault>().Configure.SelectConstructor(() => new SessionGeneratorDefault((SecurityValues)null, false));
x.For<ISessionGenerator>().Use<SessionGeneratorDefault>()
.Ctor<SecurityValues>("secValues").Is(Utilities._buildSecurityValues())
.Ctor<bool>("forceLocalMode").Is(AppState._instance.ForceLocalMode);
Imagine my surprise when the code didn’t work! Oh, it compiled just fine, but at runtime it used the incorrect constructor.
After a bit of trial and error I stumbled upon the fact that the “SelectConstructor” method is still available; it has just been moved in the object model. The final code I came up with, which works just fine, is as follows (notice that it is all one statement now instead of two):
x.For<ISessionGenerator>().Use<SessionGeneratorDefault>()
.SelectConstructor(() => new SessionGeneratorDefault((SecurityValues)null, false))
.Ctor<SecurityValues>().Is(Utilities._buildSecurityValues())
.Ctor<bool>().Is(AppState._instance.ForceLocalMode)
;
*NOTE: I removed the parameter names (“secValues” and “forceLocalMode”) from the .Ctor calls, as they are only needed when the constructor has multiple parameters of the same type. They could be added back for clarity if desired.