Developer Week 2015

February 17, 2015 on 12:30 pm | In DotNet, Windows | No Comments

For the third time in a row I will be speaking at Developer Week 2015 in Nuremberg, Germany. This year I will not do it solo: I am going with CODEFUSION's Head Developer Marcin Słowik and we will be speaking about creating professional-style user controls in WPF, the way the folks at Telerik or Infragistics do it. Please join us between the 15th and 18th of June 2015 in Nuremberg!

Scaling CI – switching from poll to push

October 21, 2014 on 10:13 pm | In Continuous Integration, DotNet, SVN, TFS, Windows | No Comments

Scaling CI has many flavors. For example:

When:

  • Code base / test no. increases -> build time increases,
  • Teams grow,
  • No. of projects grows.

Then:

  • Create targeted builds (dev build, qa build),
  • Write fast unit tests,
  • Smaller teams with local integration servers,
  • Modularize the code base,
  • Scale hardware,
  • Add more build agents,
  • Parallelize.

and last but not least:

  • Ease the load on the source control system.

Let me show you how to make Subversion and (TFS) Git proactively inform Jenkins CI about changes in source control.

The most straightforward way to let the CI server know that something has changed in the repository is to configure polling. It means that the CI server periodically asks the source control system "do you have changes for me?". In Jenkins CI you configure it under "Build Triggers" and "Poll SCM". Jenkins uses cron-style notation like this:

image

Five asterisks "* * * * *" mean: poll every minute. Every minute is as close to continuous as you can get; more often is not possible. Most of the time it is not a problem, once a minute is quite enough. But what if you have many repositories under CI? A single Jenkins CI request does not cost much, but if there are many repositories to check it can mean a significant delay.
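If you do keep polling for some jobs, Jenkins also lets you spread the load. A small sketch of schedule expressions (the H token is a Jenkins feature that hashes the job name into an offset; this assumes a reasonably recent Jenkins version):

# poll every minute (what is shown above)
* * * * *
# poll every five minutes, with a per-job offset so that not all jobs hit the SCM at the same moment
H/5 * * * *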

There is a way to change that: switching from poll to push. How about letting the source control system inform the CI server "I have something new for you"? The mechanism that makes it possible is called hooks (at least it's hooks in Subversion and Git). Hooks are scripts that are executed in different situations: on the client before or after a commit (pre-commit, post-commit), before or after an update (pre-update, post-update) and so on, or on the server before or after a receive (pre-receive, post-receive). What is interesting for us is the post-commit hook in Subversion (look for the hooks subdirectory on the server) or post-receive in Git (look in .git\hooks). Because Git is distributed you have hooks in every repo, but the one that is interesting for us is of course the repo the CI server builds from, and from its point of view it is the post-receive hook that needs to be executed. In those hooks you can do basically everything you want. We will get back to it soon.

On the Jenkins CI side you need to change the trigger to "Trigger build remotely". This option is only visible if your installation of Jenkins is secured with a login and password; without security you do not need it at all.

image

If Jenkins is not secured, you can always trigger the build by simply calling the URL:

http://[jenkins_server]/job/[job_name]/build

If your installation is secured you have to flag "Trigger build remotely" and you can set a security token for the build. Only with this token will the build be triggered.

image

The URL that needs to be called in this case is

http://[jenkins_server]/job/[job_name]/build?token=[token]

If the Jenkins job is viewable without authentication it will be possible to trigger the build this way. But sometimes Jenkins CI is secured in such a way that nothing is viewable without logging in. How do you trigger a build in this case? Well, there is a plug-in for that. It is called "Build Authorization Token Root Plugin" and it is available under https://wiki.jenkins-ci.org/display/JENKINS/Build+Token+Root+Plugin. In this case the URL will be

http://[jenkins_server]/buildByToken/build?job=[job_name]&token=[token]

We are ready on the Jenkins CI side. Let's make it ready on the source control system side. Since we are Microsoft-minded at CODEFUSION (my company), we have Subversion on our own Windows server and Git on Microsoft's Visual Studio Online (visualstudio.com).

In Subversion, go to the server and look for the repositories. Go to the repository you want to trigger from and into its hooks subdirectory. Create a file called post-commit.cmd. Subversion will run this script every time something comes in. We want to simply call a URL. Under Linux you would use the curl command. You can do that here too, but you would have to download curl for Windows and place it somewhere on the server. There is a better way: you can use PowerShell to call the URL. So create a post-commit.ps1 file (the name does not actually matter, but let's keep it "in Ordnung"). Inside, write the script:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$url="https://[jenkins_server]/buildByToken/build?job=[job_name]]&token=[token]"
(New-Object System.Net.WebClient).DownloadString("$url");

The first line is only needed if you have Jenkins running over SSL with a self-issued certificate (like we have). In the second line please fill in the gaps to form the correct URL. The third line calls this URL. The nice thing about it: PowerShell is most likely already installed if you are on a modern Windows Server.

Now call the PowerShell script from the post-commit.cmd like this:

PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%~dp0post-commit.ps1'"

The -NoProfile and -ExecutionPolicy switches make it possible to call the script from the command line. In the -Command switch pay attention to the syntax: %~dp0 expands to the directory the calling batch file resides in (the hooks directory in our case).

Now check something in and watch the build being triggered (if it’s not – check it once again – it worked on my machine).

Now Git. We were using TFS Git from visualstudio.com. There is no access to hooks under TFS, but Microsoft was kind enough to make it possible in another way. Log into visualstudio.com, go to your project and look for "Service Hooks".

image

It lets you integrate with various 3rd party services. One of them is Jenkins CI.

image

I would like Microsoft to let me make a plain URL call among those "Services". Please. But since that is not possible, let's choose Jenkins.

image

Decide to trigger the build after every code push. You can set the filters to get it triggered only for certain repositories or branches. Then choose to trigger a generic build and provide all the necessary information: the Jenkins URL, user name, API token (more on that later), build (the job name, provided automatically) and build token (as in the SVN case, the one you set in Jenkins when you configure "Trigger build remotely"). To get the API token in Jenkins CI go to "People", search for the configured user and choose "Configure".

image

Look for API token and use it on visualstudio.com.
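Before wiring up the Service Hook you can simulate the authenticated call it will make from a PowerShell prompt. This is only a sketch using the placeholders from this post; Jenkins expects HTTP basic authentication with the user name and the API token:

# placeholders: [user], [api_token], [jenkins_server], [job_name], [token]
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("[user]:[api_token]"))
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("Authorization", "Basic $basic")
$wc.DownloadString("https://[jenkins_server]/job/[job_name]/build?token=[token]")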

Test it and check if the build was triggered. It should be. It worked on my machine.

I hope it was useful!

Adventures with certificates, 2-way-SSL and WCF

September 8, 2011 on 8:43 pm | In DotNet, Windows | No Comments

I've recently dived deep into WCF and security. To be exact, I've tinkered a little with something that is called 2-Way-SSL. It's quite a complex topic and I will try to summarize what I've learned.

1. Issuing certificates

Resources: http://msdn.microsoft.com/en-us/library/ff648902.aspx and http://msdn.microsoft.com/en-us/library/aa386968%28v=vs.85%29.aspx

System: Windows 7

To start you will have to have a certificate. Essentially you have two options if you don’t have one yet:

a) getting one from a CA (Certification Authority),

b) issuing one for yourself.

While choosing the way to go, keep in mind what you need it for. The whole point of certificates is that they need to be trusted. For development it is sufficient that you trust yourself, but if you want to sign the SSL communication from your website or sign the emails you send, you will need something more. For some purposes a free certificate from an issuer like COMODO, CAcert or StartSSL is sufficient. They will only verify that you own the domain and/or email address. If you need more thorough verification you will have to pay (from tens to thousands of dollars).

If you are a .NET developer sitting on Windows (sometimes mutually exclusive – cheers to the Mono developers), you can use a tool called MakeCert. You will find it in the Visual Studio Command Prompt. To create a usable certificate you will have to act as a CA yourself. So as a first step you will have to issue a certificate for your CA. You can do it like that:

makecert -n "CN=MkCA" -r MkCA.cer -sv MkCA.pvk

Add a password for your private key (or choose to use none). This command will create a certificate *.cer and a file containing the private key *.pvk. The certificate needs to be added to the "Trusted Root Certification Authorities" store. To do so, start the Microsoft Management Console (execute mmc.exe) and from the menu go to File –> Add/Remove Snap-in… and choose Certificates. Press the Add > button and choose to manage the certificates for the Computer account on the Local computer. Then press OK.

Navigate to Console root –> Certificates (Local computer) –> Trusted Root Certification Authorities and from the context menu choose All Tasks –> Import…

image

Find the *.cer file you created using MakeCert and add it to the certification store.
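If you prefer the command line over the MMC snap-in, the same import can be done with certutil (shipped with Windows; run it from an elevated prompt so it targets the Local Computer store):

certutil -addstore Root MkCA.cer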

Congratulations, from now on you trust the certificates issued by the CA you have just created. Now you need a certificate combined with a private key to secure the communication from the server. To do so you have to issue the following commands:

makecert -n "CN=localhost" -ic MkCA.cer -iv MkCA.pvk -sv MkServer.pvk MkServer.cer

pvk2pfx -pvk MkServer.pvk -spc MkServer.cer -pfx MkServer.pfx

The first one will create a private key file and a certificate issued by your certificate authority MkCA for the "localhost" computer. The second one will create a *.pfx file that contains both the certificate and the private key (it can be protected by a password; to assign one use the -po switch of pvk2pfx). The *.pfx file is necessary to secure the communication. The server will send the certificate (containing, among other information, the server's public key) to the client. The client will encrypt a random number using the server's public key and send it back to the server. This number will become a symmetric key used by both parties to encrypt the communication.
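For example, a password-protected variant of the pvk2pfx call above could look like this (the password value is just a placeholder; some tools, soapUI among them as noted later, insist on a *.pfx that has one):

pvk2pfx -pvk MkServer.pvk -spc MkServer.cer -pfx MkServer.pfx -po YourPfxPassword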

2. Securing the IIS communication using SSL

Now that you have the *.pfx ready you can secure an IIS web site. To do so, do the following:

1. Open IIS Manager (start InetMgr.exe or go to Control Panel –> Administrative Tools –> Internet Information Services (IIS) Manager)

2. Go to the server node (the root node with the name of your computer) and open "Server Certificates"

image

3. From Actions choose Import …

image

4. Pick the *.pfx file you’ve created and enter password (if you used one).

5. Create a new web site (or use the default one if you would like to secure that) and from Actions choose Bindings…

image

6. Choose binding type https and the newly imported SSL certificate.

image

Voila, the website now supports secure communication.
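For the record, the same binding can also be scripted. This is only a sketch and assumes IIS 7.5 or later with the WebAdministration PowerShell module; it picks up the certificate issued for CN=localhost and binds it to the default web site on port 443:

Import-Module WebAdministration
# find the server certificate imported above in the machine store
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=localhost" } | Select-Object -First 1
# add an https binding and attach the certificate to it
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443
New-Item IIS:\SslBindings\0.0.0.0!443 -Value $cert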

Let's talk a little about the client (browser) configuration. While in IIS Manager, with focus on the newly created web site, choose Browse *:443 (https) from the Actions pane (or navigate to the website by typing the URL into the browser's address bar).

image

As you can see there is a problem with the certificate. It’s because the browser tried to verify the issuer of the server certificate and failed. It failed because the issuer is not trusted.

image

As you may remember, you added your certificate authority certificate to the computer store. But that is not enough: the browser uses the Current User certificate store. So you will have to add the certificate authority once again, this time to the user store. Use mmc.exe again, add the Certificates snap-in, but choose "My user account" this time. Then add the CA certificate as described in section 1.
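Again, if you prefer the command line, certutil can import into the user store as well; the -user switch targets the Current User store:

certutil -user -addstore Root MkCA.cer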

Note: if the certificate keeps disappearing from the store (why, I don't know; it did on my machine), please experiment with the storage. Choose Registry or Local Storage as shown below.

image

Restart the browser and the problem report will be gone. The communication is encrypted. The certificate is trusted.

3. 2-Way-SSL

The term "2-Way-SSL" is sometimes used to describe the scenario where both the server and the client need to verify each other. It means that not only must the server present a certificate; the client needs to present one as well.

On IIS it can be achieved by setting the client certificate requirement in the “SSL Settings” of the web site or web application.

image

Set the Require SSL and choose to require client certificates.

image

From now on you will get HTTP Error 403.7 – Forbidden if you try to get the resource in the browser.

image

It's because you don't have a client certificate ready on the client side. Let's fix it. We will need a *.pfx file once again. Let's create one.

makecert -n "CN=marcin" -ic MkCA.cer -iv MkCA.pvk -sv MkClient.pvk MkClient.cer

pvk2pfx -pvk MkClient.pvk -spc MkClient.cer -pfx MkClient.pfx

With the *.pfx file ready you should add it to Personal –> Certificates in the Local Computer store. Now the client will be able to present the client certificate and complete the 2-Way-SSL handshake.
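To check the handshake outside of a browser you can make a raw HTTPS request with the client certificate attached. A PowerShell sketch (adjust the store path if you imported MkClient.pfx somewhere else):

# find the client certificate issued above (CN=marcin)
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=marcin" } | Select-Object -First 1
$req = [System.Net.HttpWebRequest]::Create("https://localhost/")
$req.ClientCertificates.Add($cert) | Out-Null
# if the two-way handshake succeeds you get a response instead of the 403.7 error
$resp = $req.GetResponse()
$resp.StatusCode
$resp.Close()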

4. WCF and 2-Way-SSL

Resources: http://www.codeproject.com/KB/WCF/wcfcertificates.aspx and http://www.codeproject.com/KB/WCF/WCFSSL.aspx

It's now time to glue the pieces together. Let's configure a WCF service to use client certificates to communicate with the server.

I used a simple service that takes a string as a parameter and returns it back to the client. I named the service EchoService and used wsHttpBinding to secure the communication on the transport level (it secures the whole communication, as opposed to securing only the messages if you use Message mode). The binding configuration looked like this:

<bindings>
  <wsHttpBinding>
    <binding name="WSHttpBinding_IEchoService">
      <security mode="Transport">
        <transport clientCredentialType="Certificate"></transport>
      </security>
    </binding>
  </wsHttpBinding>
</bindings>

To keep things simple I turned the publication of metadata off. The services and behaviors looked like this:

<services>
  <service name="WcfServiceLib.EchoService">
    <endpoint address="" binding="wsHttpBinding"
              bindingConfiguration="WSHttpBinding_IEchoService"
              contract="WcfServiceLib.IEchoService">
    </endpoint>
  </service>
</services>
<behaviors>
  <serviceBehaviors>
    <behavior>
      <serviceMetadata httpGetEnabled="False"/>
      <serviceDebug includeExceptionDetailInFaults="True" />
    </behavior>
  </serviceBehaviors>
</behaviors>

On the client side of things the configuration needs to be extended with a behaviors section like this one:

<behaviors>
  <endpointBehaviors>
    <behavior name="clientCertificateConf">
      <clientCredentials>
        <clientCertificate findValue="8516165A77364EDA28853CAAAD6197C5158E80A4"
                           storeLocation="CurrentUser"
                           x509FindType="FindByThumbprint" />
      </clientCredentials>
    </behavior>
  </endpointBehaviors>
</behaviors>

It tells the client to use the client credentials taken from the CurrentUser certificate store and to search for the certificate using a given thumbprint (findValue). You can find the thumbprint on the certificate's Details tab.

image

Note: if you get an "Invalid hexadecimal string format" error from System.IdentityModel, you will need to type the thumbprint by hand into the config file instead of copying it into the file (the copied value can contain an invisible Unicode character from the certificate dialog that breaks the parser). Oh, don't ask why. Don't forget to delete and re-type the quotation marks too (wink)

The rest of the client configuration is easy: a wsHttpBinding like the one you used on the server, plus gathering everything together in the client configuration block.

<client>
  <endpoint address="https://localhost/Wcf2WaySsl/Echo.svc"
            binding="wsHttpBinding"
            bindingConfiguration="WSHttpBinding_IEchoService"
            contract="EchoServiceReference.IEchoService"
            behaviorConfiguration="clientCertificateConf"
            name="WSHttpBinding_IEchoService">
  </endpoint>
</client>

You are done.

You can also configure the client in code, like this:

// Configure binding with transport level certificate security
WSHttpBinding binding = new WSHttpBinding();
binding.Security.Mode = SecurityMode.Transport;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;

EndpointAddress endptadr = new EndpointAddress("https://localhost/Wcf2WaySsl/Echo.svc");

// Configure client by code
using (EchoServiceReference.EchoServiceClient client = new EchoServiceReference.EchoServiceClient(binding, endptadr))
{
    // Configure client certificate
    client.ClientCredentials.ClientCertificate.SetCertificate(StoreLocation.CurrentUser,
        StoreName.My, X509FindType.FindByThumbprint,
        "8516165A77364EDA28853CAAAD6197C5158E80A4");

    client.Echo("Test");
}

5. Great finale

Keep in mind that the whole process is a bit tricky. It is easy to make a mistake, so be careful and check that:

1. the status of the certificate you use is "This certificate is OK.",

image

If not, check whether the CA is among the trusted root certification authorities. Beware of disappearing certificates (wink)

2. the “Issued To” field of the server certificate matches the name of the server you are using (localhost in my case),

3. If you are testing the SSL configuration using a browser other than IE, keep in mind that it may use its own certificate store and not the default Windows one (Firefox does, Chrome does not).

4. you can easily test the services with tools like soapUI. To set it up to use client certificates go to Preferences and set it up like this:

image

I had to create a *.pfx file WITH password to make it work.

Then, if the server is configured for transport-level security, you can simply send an XML request to it to check if everything works fine.

Happy coding!

Selenium RC and FitNesse as a service on Windows Server 2008

November 7, 2009 on 10:19 pm | In Continuous Integration, Windows | 2 Comments

If you are working in a team or running a continuous integration process, the most comfortable way to run tools like Selenium RC Server or FitNesse is to install them as a Windows service. I was doing this earlier on my old Windows Server 2003 by issuing instsrv.exe (to install a service) with srvany.exe (to run anything), both from the Windows Resource Kit. I had to edit the registry to specify what exactly I wanted to run (java -jar selenium-server.jar or java -jar fitnesse.jar).

But there is no Windows Resource Kit for 2008. You might use sc.exe and get the old srvany.exe (with compatibility issues according to Microsoft itself). It would work, but why bother when there is the Non-Sucking Service Manager (nssm)! All you have to do to install a service with this tool is download it, issue

nssm.exe install SeleniumRC

and edit this dialog box:

image

Click Install service and you are done. Selenium RC Server is installed. All you have to do now is start it. Voila!
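FitNesse can be installed the same way. With newer versions of nssm you can even skip the dialog and pass the application and its arguments directly on the command line; a sketch (the Java path, FitNesse location and port are just examples):

nssm.exe install FitNesse "C:\Program Files\Java\jre6\bin\java.exe" -jar C:\fitnesse\fitnesse.jar -p 8080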

How to make CruiseControl.NET accept SSL certificate under Windows Server 2008?

October 24, 2009 on 11:44 pm | In Continuous Integration, Windows | 2 Comments

If you are running CruiseControl.NET under the Local System account and your SVN server certificate was issued by yourself (or by VisualSVN Server), you will quickly run into trouble. Normally, if you run any command against your repository, you will get this information:

C:\Program Files\svn\bin>svn log https://your_server/svn/your_repository/trunk --username username --password password
Error validating server certificate for 'https://your_server:443':
- The certificate is not issued by a trusted authority. Use the
fingerprint to validate the certificate manually!
- The certificate hostname does not match.
Certificate information:
- Hostname: your_server
- Valid: from Sat, 26 Sep 2009 17:24:27 GMT until Tue, 24 Sep 2019 17:24:27 GMT

- Issuer: your_server
- Fingerprint: 24:8e:f6:ba:c7:a6:3f:69:32:c0:21:92:64:44:62:fe:2c:bb:b4:69
(R)eject, accept (t)emporarily or accept (p)ermanently?

If you accept, you will not be bothered again. But CCNet works as a Windows service; there is no one there to make the decision. How to deal with this issue? Well, earlier it was easy enough: you had to use one of the security holes and start cmd.exe in interactive mode with the at command (look here for more details). But with Windows Server 2008 that is not possible; you will simply get this:

C:\Users\Administrator>time
The current time is: 23:31:11.59
Enter the new time:

C:\Users\Administrator>at 22:32 /interactive cmd.exe
Warning: Due to security enhancements, this task will run at the time
expected but not interactively.
Use schtasks.exe utility if interactive task is required (‘schtasks /?’
for details).
Added a new job with job ID = 1

How to deal with this? There is a very easy solution. Set the CruiseControl.NET service's "Allow service to interact with desktop" flag (Start –> Control Panel –> Administrative Tools –> Services –> CruiseControl.NET) like this:

image

Restart the service and wait a while for this window to appear:

image

Select "Show me the message".

Voila! You now have a command line available as the Local System user. You can now issue the

C:\Program Files\svn\bin>svn log https://your_server/svn/your_repository/trunk --username username --password password
command and accept the SSL certificate permanently.

Local System user accepting the SSL SVN certificate for the CruiseControl.NET server

From this time on your CCNet server will not have any problems accessing your secured repository.
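If your Subversion client is 1.6 or newer there is also a non-interactive way around the prompt. Note that --trust-server-cert only accepts certificates from unknown certificate authorities, so it will not help with the hostname mismatch reported above, and whether you can route these switches through your CruiseControl.NET configuration depends on your setup; treat it as a sketch:

svn log https://your_server/svn/your_repository/trunk --username username --password password --non-interactive --trust-server-cert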

How to associate a file extension with a given file?

January 12, 2009 on 9:25 pm | In Windows | No Comments

(Windows) You can edit a file type in Explorer (XP: Tools –> Folder options, Vista: Start –> Default Programs). But let's say you want to do it automatically and you have to pass an additional parameter to the program you want to start (I still have not figured out how to define such a parameter under Vista). The easiest way is to create a file with the *.reg extension that contains this script:

image

Red (cicstarter) – Shell command name

Blue (startf) – Action name

Brown (cicvlm.exe f=\”%1\”) – Application path (parameter inside f=)

Green (.cic) – File extension

You can manipulate the colored parts to fit your needs. Here is the script. Good luck!
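In case the screenshot is hard to read, here is a sketch of what such a *.reg file can look like, reconstructed from the legend above (the names cicstarter, startf, cicvlm.exe and .cic come from the screenshot; you will probably want the full path to the executable):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\.cic]
@="cicstarter"

[HKEY_CLASSES_ROOT\cicstarter\shell\startf\command]
@="cicvlm.exe f=\"%1\""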
