XVIII KKIO Software Engineering Conference

May 26, 2016 on 8:24 pm | In CODEFUSION, Conference | No Comments

I will be speaking at the XVIII KKIO Software Engineering Conference in Wrocław, Poland. The conference will take place between the 15th and 17th of September 2016. It is a conference “covering all topics of software engineering research and practice”. This year's motto is: “Better software = more efficient enterprise: challenges and solutions”. I will be speaking about “Agile Experimentation” (more on that soon). I will also cover NActivitySensor, the Visual Studio extension we developed at CODEFUSION. Let's meet in Wrocław!

I will be speaking at CIDC 2015 in Orlando, Florida

August 31, 2015 on 1:29 pm | In Clarion, DotNet | No Comments

I was invited to give a talk at CIDC 2015 in Orlando, USA. CIDC is the annual Clarion International Developers Conference. I will be speaking about integrating Microsoft .NET with Clarion. The talk was prepared together with C.I.C. Software GmbH and my company (CODEFUSION). The Clarion part is supervised by Andrzej Skolniak from C.I.C., and we will be giving the talk together. We will cover various interoperability solutions between .NET and Clarion tried out in one of the projects my company is co-developing with C.I.C. Software. The interoperability solution is based on unmanaged exports: modified .NET libraries that are accessible from Clarion. The managed methods are not exposed as such. Instead, inverse P/Invoke thunks, automatically created by the common language runtime, are exported. These thunks provide the same marshaling functions as “conventional” P/Invoke thunks, but in the opposite direction. Using this method we were able not only to connect a full blown .NET based BPMN engine to Clarion, but also to inject Microsoft WPF based controls into windows created in Clarion. With this, plus a set of callback functions (for .NET to talk back to Clarion, for example to run EVALUATEs), we built a production grade extension to Clarion based software in .NET and C#.
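One common way to get such exports from C# is the “Unmanaged Exports” NuGet package (RGiesecke.DllExport). The sketch below only illustrates the general idea; the function names, signatures and the callback are invented for this post and are not the actual code of our project:

using System.Runtime.InteropServices;
using RGiesecke.DllExport; // "Unmanaged Exports" NuGet package; needs an x86 or x64 build, not AnyCPU

public static class ClarionBridge
{
    // Callback type so that .NET can talk back to Clarion (for example to ask it to EVALUATE an expression).
    [UnmanagedFunctionPointer(CallingConvention.StdCall, CharSet = CharSet.Ansi)]
    public delegate int EvaluateCallback(string expression);

    private static EvaluateCallback _evaluate;

    // Exported as a plain C-style entry point; Clarion declares and calls it like any other DLL function.
    [DllExport("RegisterEvaluateCallback", CallingConvention = CallingConvention.StdCall)]
    public static void RegisterEvaluateCallback(EvaluateCallback callback)
    {
        _evaluate = callback;
    }

    [DllExport("StartBpmnProcess", CallingConvention = CallingConvention.StdCall)]
    public static int StartBpmnProcess([MarshalAs(UnmanagedType.LPStr)] string processName)
    {
        // Hand the work over to the managed side (BPMN engine, WPF control hosting, ...) here,
        // and call back into Clarion when needed.
        return _evaluate != null ? _evaluate("SomeClarionExpression") : 0;
    }
}

On the Clarion side the exported names are then declared like functions coming from any ordinary DLL.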

Join us at the CIDC 2015 between September 30th and October 2nd in Orlando, Florida, USA.

Developer Week 2015

February 17, 2015 on 12:30 pm | In DotNet, Windows | No Comments

For the third time in a row I will be speaking at Developer Week 2015 in Nuremberg, Germany. This year I will not do it solo: I am going with CODEFUSION's head developer Marcin Słowik, and we will be speaking about creating professional-style user controls in WPF, the way the people at Telerik or Infragistics do it. Please join us between the 15th and 18th of June 2015 in Nuremberg!

Scaling CI: switching from poll to push

October 21, 2014 on 10:13 pm | In Continuous Integration, DotNet, SVN, TFS, Windows | No Comments

Scaling CI has many flavors. For example:

When:

  • Code base / test no. increases -> build time increases,
  • Teams grow,
  • No. of projects grows.

Then:

  • Create targeted builds (dev build, qa build),
  • Write fast unit tests,
  • Smaller teams with local integration servers,
  • Modularize the code base:
    • Scale hardware,
    • Add more build agents,
    • Parallelize.

and last but not least:

  • Ease the load on the source control system.

Let me show you how to make Subversion and (TFS) Git proactively inform Jenkins CI about changes in source control.

The most straightforward way to let the CI server know that something has changed in the repository is to configure polling. It means the CI server periodically asks the source control system: “do you have changes for me?”. In Jenkins CI you configure it under “Build Triggers” and “Poll SCM”. Jenkins uses cron-style notation like this:

[Screenshot: the “Poll SCM” schedule field under “Build Triggers” in Jenkins, containing “* * * * *”]

Five stars “* * * * *” mean: poll every minute. Every minute is as close to continuous as you can get; more often is not possible. Most of the time that is not a problem, once a minute is quite enough. But what if you have many repositories under CI? A single Jenkins poll request does not cost much, but with many repositories to check, polling can add up to a significant load and delay.

There is a way to change it: switching from poll to push. How about letting the source control system inform the CI server “I have something new for you”? The mechanism that makes it possible is called hooks (at least that is what they are called in Subversion and Git). Hooks are scripts that are executed in different situations: on the client before or after a commit (pre-commit, post-commit), before or after an update (pre-update, post-update) and so on, or on the server before or after a receive (pre-receive, post-receive). What is interesting for us is the post-commit hook in Subversion (look for the hooks subdirectory on the server) or post-receive in Git (look in .git\hooks). Because Git is distributed you have hooks in every repo, but the one that matters here is of course the repo the CI server watches, and from its point of view it is the post-receive hook that needs to be executed. In those hooks you can do basically everything you want. We will get back to them soon.

On the Jenkins CI side you need to change the trigger to “Trigger build remotely”. This option is only visible if your installation of Jenkins is not secured with a login and password.

[Screenshot: the “Trigger build remotely” option in the Jenkins job configuration]

In this case you can always trigger the build by simply calling the URL:

http://[jenkins_server]/job/[job_name]/build

If your installation is secured you have to check “Trigger build remotely” and set a security token for the build. The build will only be triggered when this token is supplied.

[Screenshot: “Trigger build remotely” with the authentication token field filled in]

The URL that needs to be called in this case is:

http://[jenkins_server]/job/[job_name]/build?token=[token]

If the Jenkins installation is viewable without authentication, that is enough to trigger the build. But sometimes Jenkins CI is secured in such a way that nothing is viewable without logging in. How to trigger a build in this case? Well, there is a plug-in for that. It is called “Build Authorization Token Root Plugin” and it is available under https://wiki.jenkins-ci.org/display/JENKINS/Build+Token+Root+Plugin. In this case the URL will be:

http://[jenkins_server]/buildByToken/build?job=[job_name]&token=[token]
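At this point anything that can issue a plain HTTP GET can queue the build. Just as an illustration (the server name, job name and token below are made up), a minimal C# call could look like this:

using System;
using System.Net;

class TriggerJenkinsBuild
{
    static void Main()
    {
        // Made-up values; substitute your own Jenkins server, job name and token.
        const string url = "http://jenkins.example.com/buildByToken/build?job=MyJob&token=s3cret";

        using (var client = new WebClient())
        {
            // A simple GET against the trigger URL is enough to queue the build.
            client.DownloadString(url);
            Console.WriteLine("Build triggered.");
        }
    }
}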

We are ready on the Jenkins CI side. Let's make things ready on the source control system side. Since we are Microsoft-minded at CODEFUSION (my company), we have Subversion on our own Windows Server and Git hosted by Microsoft on visualstudio.com.

For Subversion, go to the server and look for the repositories. Go to the repository you want to trigger and into its hooks subdirectory. Create a file called post-commit.cmd. Subversion will run this script every time something comes in. We simply want to call a URL. Under Linux you would use the curl command. You could do that here as well, but you would have to download curl for Windows and place it somewhere on the server. There is a better way: you can use PowerShell to call the URL. So create a post-commit.ps1 file (the name does not actually matter, but let's keep it “in Ordnung”). Inside write the script:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$url="https://[jenkins_server]/buildByToken/build?job=[job_name]]&token=[token]"
(New-Object System.Net.WebClient).DownloadString("$url");

The first line is only needed if you have Jenkins running over SSL with a self-issued certificate (like we have). In the second line, fill in the gaps to form the correct URL. The third line calls this URL. The nice thing about it: PowerShell is most likely already installed if you are on a modern Windows Server.

Now call the PowerShell script from the post-commit.cmd like this:

PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%~dp0post-commit.ps1'"

The NoProfile and ExecutionPolicy switches make it possible to call the script from the command line. In the Command switch pay attention to the syntax: %~dp0 expands to the directory the .cmd file resides in, so the .ps1 script is expected to sit right next to it.

Now check something in and watch the build being triggered (if it is not, check the configuration once again; it worked on my machine).

Now Git. We were using TFS Git from visualstudio.com. There is no access to hooks under TFS, but Microsoft was kind enough to make it possible in another way. Log into visualstudio.com, go to your project and look for “Service Hooks”.

[Screenshot: the “Service Hooks” tab of a project on visualstudio.com]

It lets you integrate with various 3rd party services. One of them is Jenkins CI.

[Screenshot: the list of third-party services available for integration]

I would like Microsoft to let me make a simple URL call among those “Services”. Please. But since that is not possible, let's choose Jenkins.

[Screenshot: the Jenkins service hook configuration dialog]

I decided to trigger the build after every code push. You can set the filters so it is triggered only for certain repositories or branches. Then choose to trigger a generic build and provide all the necessary information: the Jenkins URL, the user name, the API token (more on that later), the build (the job name, filled in automatically) and the build token (as in the SVN case, the one provided by Jenkins when you configure “Trigger build remotely”). To get the API token, go to “People” in Jenkins CI, search for the configured user and choose “Configure”.

[Screenshot: the Jenkins user configuration page]

Look for the API token and use it on visualstudio.com.

Test it and check whether the build was triggered. It should be. It worked on my machines.

I hope it was useful!

Vanilla build server and a little NuGet gem

October 6, 2014 on 7:37 pm | In ASP.NET MVC, Continuous Integration, DotNet, MSBuild | No Comments

A vanilla build server is a concept that says the build server should have as few dependencies as possible. It should be like vanilla ice cream without any raisins (I hate raisins in ice cream). Let me cite the classic (from Continuous Integration in .NET):

“It’s strongly suggested that you dedicate a separate machine to act as the CI server. Why? Because a correctly created CI process should have as few dependencies as possible. This means your machine should be as vanilla as possible. For a .NET setup, it’s best to have only the operating system, the .NET framework, and probably the source control client. Some CI servers also need IIS or SharePoint Services to extend their functionality. We recommend that you not install any additional applications on the build server unless they’re taking part in the build process.”

I was recently preparing a talk for a conference and setting up a brand new CI server on Windows Server 2012. My ASP.NET MVC project build of course ended up with the following error:

error MSB4019: The imported project "C:\Program Files 
(x86)\MSBuild\Microsoft\VisualStudio\v11.0\
WebApplications\Microsoft.WebApplication.targets" 
was not found. Confirm that the path in the <Import> 
declaration is correct, and that the file exists on disk.

Well, of course. I have a vanilla machine without any MSBuild targets for ASP.NET MVC. I was going to solve it the usual way: create a tools directory, copy the needed targets into the repository and configure the MSBuild paths to take the targets provided with the repository. It worked like a charm in the past and it would have worked now. But something (call it intuition) made me check NuGet, and to my joy I found this little gem:

https://www.nuget.org/packages/MSBuild.Microsoft.VisualStudio.Web.targets/12.0.1

“MSBuild targets for Web and WebApplications that come with Visual Studio. Useful for build servers that do not have Visual Studio installed.” Exactly!
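It installs with a single Package Manager Console command (the package id is taken from the link above; the exact version will vary):

PM> Install-Package MSBuild.Microsoft.VisualStudio.Web.targets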

I quickly installed it and configured MSBuild on the build server to use it like this:

/p:VSToolsPath='..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.12.0.1\tools\VSToolsPath'

It is a command line parameter I’ve added to the build arguments.
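Why does overriding VSToolsPath work? A Visual Studio generated web project typically imports these targets through that very property, with a line in the .csproj roughly like this (the exact form can differ between project types and Visual Studio versions):

<Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets"
        Condition="'$(VSToolsPath)' != ''" />

Pointing VSToolsPath at the folder inside the NuGet package therefore makes the import resolve against the packaged targets instead of a Visual Studio installation that is not there.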

And voila!

.NET Developer Days 2014 Conference

September 17, 2014 on 10:13 am | In Continuous Integration, DotNet, Software Engineering | No Comments

I will be speaking at .NET Developer Days 2014 in Wrocław, Poland. The conference will be held between the 14th and 16th of October 2014 at the City Stadium in Wrocław. The topic is “Continuous integration and deployment in .NET with Jenkins CI and Octopus Deploy”. Here is the conference website: http://developerdays.pl/.

Software decay

September 3, 2014 on 7:47 pm | In Software Engineering | No Comments

At my company, CODEFUSION, we are working with bigger and bigger customers. We are getting hit by terms that were little known to us until now. Recently we got a contract to sign containing a term that translates literally from Polish as software “illness”. The word illness was in quotation marks. The term was new to me, so I started to dig. It turned out that what was meant here was probably “software decay” (also called software rot, code rot, bit rot, software erosion or software entropy). It is something all of us software developers are fighting with, sometimes without knowing it has a name. Software does not change, bits don't rot, programs do not get ill. But the environment in which they are executed often changes. New hardware is installed, dependent software changes (for example, the database engine is updated). So the software slowly deteriorates. Faults are discovered that were never seen before, performance drops, the overall stability decreases. We see it everywhere: websites look “old” after a year or two, Windows applications written for Windows without tiles look “funny” in Windows 8. From experience I know environments where ancient technologies are still used to develop software because everyone is afraid to touch a running system. So: software rots.

It was quite funny how quickly I discovered a decayed piece of software that was written partly by myself. I work with Visual Studio, and for various reasons I'm deep into add-in development for Visual Studio. To my surprise one of my add-ins suddenly stopped working. I hadn't done anything with it for a year or so. What I had done was install a new version of Visual Studio and an unknown number of updates for both the old and the new one. Maybe that is what rotted my add-in. So I quickly spun up the debugger and found that this line of code

_toolWindow = _applicationObject.Windows.CreateToolWindow(
   _addInInstance,
   _progId,
   _caption,
   _guid,
   ref docObj);

raises an exception. The ProgId seems to be wrong; no matter that it was all right a year ago, now it is wrong. Google seemed to know nothing about it. So I fiddled around. _toolWindow is a custom window for my add-in, of type “Window”. The Visual Studio API is not the cleanest piece of code I have seen. Not surprisingly there is a “Windows2” type with a method “CreateToolWindow2” that produces an object of type “Window” (of course not “Window2”). But it was exactly what I needed. So I changed the implementation slightly to:

Windows2 windows2 = (Windows2)_applicationObject.Windows;
_toolWindow = windows2.CreateToolWindow2(
   _addInInstance,
   Assembly.GetCallingAssembly().Location,
   _class,
   _caption,
   _guid,
   ref docObj);

And voila! It magically worked. I have no idea why the class name works better than the ProgID. Maybe it was an SDK update, maybe something else. But what the heck! I will let it work until it rots again.

WCF services behind NAT problem and solution

August 28, 2014 on 12:02 pm | In ASP.NET MVC, DotNet | No Comments

Problem: we have a set of WCF services running on a server, and an ordinary ASP.NET page that calls one of the services to display its state. When we call that page it is supposed to look like this:

[Screenshot: the status page rendered in green]

A green page indicates everything works fine.

At a customer's site we installed the services and got this instead:

[Screenshot: the status page rendered in red with an error message]

The message was: There was no endpoint listening at http://…/Services/BasicDataService.svc/DeliverServiceState that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

We quickly discovered that there was nothing wrong with the services. Our app worked fine and we were able to call the “.svc” endpoint in the browser.

But why do we get this exception?

A quick call to the service made on the server itself revealed no response (“This page can't be displayed”). A ping to the domain revealed “Request timed out.”. Oh, we are getting nearer. Our status page calls the services from inside the customer's network (both the website and the services are in the same IIS site). It looks like we are behind NAT. Requests coming to the company router from outside are correctly routed to the server working inside the company network: the domain name is translated with the help of DNS to the global IP address, and the company router routes the communication to the server, which works with its local IP address. But if we connect from the inside, we are hitting the wrong side of the router and it is not able to translate the global IP to the local IP correctly. We are not landing where we are supposed to.

Reconfiguring the company router should do the trick (NAT loopback). We asked the company admin to make the change and proceeded with a quick workaround: we changed the hosts file to fix it right away.

Hosts is a text file located in C:\Windows\System32\drivers\etc, used by the system to locally resolve names to IP addresses. We can add the server's local IP address and match it with the domain name.
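For example, assuming the services are reached under services.example.com and the server's local address is 192.168.1.10 (both values are made up here), the line added to the hosts file would be:

192.168.1.10    services.example.com

From then on, name resolution on the server itself goes straight to the local address and never touches the router.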

That solved the problem. But soon we got another one.

We added a second binding to the site and our information page went red once again. This one was actually easy to solve. We called the ”.svc” service directly and got:

This collection already contains an address with scheme http.  There can be at most one address per scheme in this collection. If your service is being hosted in IIS you can fix the problem by setting ‘system.serviceModel/serviceHostingEnvironment/multipleSiteBindingsEnabled’ to true or specifying ‘system.serviceModel/serviceHostingEnvironment/baseAddressPrefixFilters’.
Parameter name: item

Yep, multiple bindings were present but WCF was configured to work with only one. We had to change:

<system.serviceModel>
   <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

to

<system.serviceModel>
   <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />

in the services web.config.

We added the second address to the hosts file and voila!

Pictures from DWX’14 conference

July 23, 2014 on 8:33 am | In DotNet, Netduino, Software Engineering, Tinkerforge | No Comments

This year I was once again an invited speaker at the Developer Week conference in Nuremberg, Germany. I was speaking (in German, of course) about the basics of hardware programming in .NET. Developer Week is the biggest developer conference in Germany: 250 sessions, 150 speakers. It consists of the .NET Developer Conference (DDC), the Web Developer Conference (WDC) and the Mobile Developer Conference (MDC). Here are some pictures from the conference.

Waiting for the first .NET wrist watch

May 9, 2014 on 2:26 pm | In Continuous Integration, DotNet, Netduino | No Comments

Almost a year ago there was a Kickstarter campaign to fund the first .NET Micro Framework watch: the AGENT smartwatch. The nice thing about it is that you will be able to program it using C# and Visual Studio. While we are still waiting for the product, there is an SDK with an emulator. It is from the same guys that gave us Netduino! I decided to check it out.

Think about it: you have a Continuous Integration server running your builds and you want to monitor it on the fly. Is there a better device to do it than a wrist watch? So I thought and decided to check it out.

Here is a quick project I hacked together to prove the concept. But before we begin, let me show you the result:

[Screenshot: the AGENT emulator showing the Jenkins build status]

Neat! Isn’t it?

I’m using Jenkins as my Continuous Integration server. It has a set of APIs for the developer to use. I decided to give the Json API a try.

I typed:

http://jenkins_url/api/json?tree=jobs[name,lastBuild[building,result]]

That gave me a nice Json result:

{
  "jobs": [
    {
      "name": "Demo4Dev1",
      "lastBuild": {
        "building": false,
        "result": "SUCCESS"
      }
    },
    {
      "name": "Demo4Dev2",
      "lastBuild": {
        "building": false,
        "result": "SUCCESS"
      }
    },
    {
      "name": "DemoTest1",
      "lastBuild": {
        "building": false,
        "result": "SUCCESS"
      }
    }
  ]
}

I went to the AGENT website and got the SDK. I fired up Visual Studio and went to New Project –> Visual C# –> Micro Framework –> AGENT Watch Application

[Screenshot: the New Project dialog with the AGENT Watch Application template]

That gave me a Hello World application.

I added the System.Http and System.IO references and headed straight to getting the HTTP response and reading the response stream to the end, like this:

HttpWebRequest req = (HttpWebRequest)WebRequest.Create(JenkinsApiUrl);
WebResponse resp = req.GetResponse();
StreamReader sr = new StreamReader(resp.GetResponseStream());
string respStr = sr.ReadToEnd();

Now I needed something to parse the Json text. Luckily for me, I wasn't the only one: there is a nice NuGet package with a Json parser. To get it, issue:

PM> Install-Package Json.NetMF

With it in place I headed straight to deserialization:

Hashtable deserializedObject = Json.NETMF.JsonSerializer.DeserializeString(respStr) as Hashtable;

Then I hacked and slashed my way through the result to find out whether everything is all right.

// Assume success
bool generalFailure = false;

// The top level hashtable has a single "jobs" entry whose value is an ArrayList of job hashtables
foreach (DictionaryEntry de in deserializedObject)
{
    foreach (Hashtable ht in de.Value as ArrayList)
    {
        foreach (DictionaryEntry job in ht)
        {
            // Skip the "name" entry; we only care about "lastBuild"
            if (!job.Key.ToString().Equals("name"))
            {
                Hashtable ht2 = job.Value as Hashtable;
                if (ht2 == null) continue;
                foreach (DictionaryEntry results in ht2)
                {
                    // Skip "building"; look at the "result" value
                    if (!results.Key.ToString().Equals("building"))
                    {
                        if (results.Value.ToString().Equals("FAILURE"))
                            generalFailure = true;
                    }
                }
            }
        }
    }
}

I have added two result images to the resources:

[Screenshot: the sun and storm result images in the project resources]

And headed to show the result:

// initialize display buffer
_display = new Bitmap(Bitmap.MaxWidth, Bitmap.MaxHeight);

// Show result
_display.Clear();
Font fontNinaB = Resources.GetFont(Resources.FontResources.NinaB);

_display.DrawText("Jenkins", fontNinaB, Color.White, 35, 10);
if (generalFailure)
{
    _display.DrawText("FAIL!", fontNinaB, Color.White, 35, _display.Height - 20);
    Bitmap image =
        new Bitmap(Resources.GetBytes(Resources.BinaryResources.storm), Bitmap.BitmapImageType.Bmp);
    _display.DrawImage(_display.Width / 2 - image.Width / 2,
        _display.Height / 2 - image.Height / 2,
        image, 0, 0, image.Width, image.Height);

}
else
{
    _display.DrawText("SUCCESS!", fontNinaB, Color.White, 35, _display.Height - 20);
    Bitmap image =
        new Bitmap(Resources.GetBytes(Resources.BinaryResources.sun), Bitmap.BitmapImageType.Bmp);
    _display.DrawImage(_display.Width / 2 - image.Width / 2,
        _display.Height / 2 - image.Height / 2,
        image, 0, 0, image.Width, image.Height);
}
_display.Flush();

I packed everything in a never-ending while loop with a small delay:
while (true)
{
  // ... code ...
  Thread.Sleep(10000);
}

Done!

And here is the screen with the failure notice:

[Screenshot: the emulator showing the failure notice]

I can’t wait to get the Agent Watch to make the final app!

