What is IL – practical example

July 21, 2016 on 1:40 pm | In DotNet | No Comments

I’m sure you know your C# code is not compiled directly into machine code and run. It is first converted into IL – CIL to be precise, which stands for Common Intermediate Language. This CIL is stored as bytecode in the assembly and compiled by the JIT (just-in-time) compiler into executable machine code at run time. So much for the theory, but have you ever tried to mess with IL yourself? No? Let’s try it.

Run the Visual Studio Command Prompt from the Start menu and find yourself a suitable directory to tinker around in. Create a program, compile it and run it:

C:\Dev\ilmagic>copy con p.cs
public class p {
  public static void Main() {
    System.Console.WriteLine("Hello console!");
  }
}^Z
        1 file(s) copied.

C:\Dev\ilmagic>csc p.cs
Microsoft (R) Visual C# Compiler version 1.2.0.60317
Copyright (C) Microsoft Corporation. All rights reserved.

C:\Dev\ilmagic>p.exe
Hello console!

Now let’s see inside the IL. You can use the ildasm disassembler you have on your machine. It can not only show you the disassembled code but also save it to a file. In order to do that, issue the following command:

C:\Dev\ilmagic>ildasm p.exe /OUT=p.il

Now take a look inside the .il file with any text editor you like. Inside is the intermediate code. It’s still readable and similar to assembler. Let’s mess around. Look for the class definition:

.class public auto ansi beforefieldinit p
       extends [mscorlib]System.Object
{
  .method public hidebysig static void  Main() cil managed
  {
    .entrypoint
    // Code size       13 (0xd)
    .maxstack  8
    IL_0000:  nop
    IL_0001:  ldstr      "Hello console!"
    IL_0006:  call       void [mscorlib]System.Console::WriteLine(string)
    IL_000b:  nop
    IL_000c:  ret
  } // end of method p::Main

  .method public hidebysig specialname rtspecialname
          instance void  .ctor() cil managed
  {
    // Code size       8 (0x8)
    .maxstack  8
    IL_0000:  ldarg.0
    IL_0001:  call       instance void [mscorlib]System.Object::.ctor()
    IL_0006:  nop
    IL_0007:  ret
  } // end of method p::.ctor

} // end of class p

Looks vaguely familiar, doesn’t it? You can even see the text that will be beamed to the console. It is in this line:

IL_0001:  ldstr      "Hello console!"

Let’s change it to:

IL_0001:  ldstr      "Hello CIL!"

and save the file.

Assemble the IL back into an executable with ilasm.exe (it’s a part of the .NET SDK – that’s why we are in the VS command prompt):

C:\Dev\ilmagic>ilasm p.il /OUTPUT=p2.exe /EXE

Microsoft (R) .NET Framework IL Assembler.  Version 4.6.1038.0
Copyright (c) Microsoft Corporation.  All rights reserved.
Assembling 'p.il'  to EXE --> 'p2.exe'
Source file is ANSI

Assembled method p::Main
Assembled method p::.ctor
Creating PE file

Emitting classes:
Class 1:        p

Emitting fields and methods:
Global
Class 1 Methods: 2;

Emitting events and properties:
Global
Class 1
Writing PE file
Operation completed successfully

Voila! Now let’s run the exe (this is when the JIT comes into play and creates machine code out of the CIL in the assembly):

C:\Dev\ilmagic>p2.exe
Hello CIL!

Nice, eh?
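By the way, if you want to check that your hand-edited assembly is still valid, the SDK also ships a verifier that checks the metadata and the IL (assuming peverify.exe is available in the VS command prompt, which it normally is):

C:\Dev\ilmagic>peverify p2.exe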

I will be speaking at CIDC 2015 in Orlando, Florida

August 31, 2015 on 1:29 pm | In Clarion, DotNet | No Comments

I was invited to give a talk at CIDC 2015 in Orlando, USA. CIDC is the annual Clarion International Developers Conference. I will be speaking about Microsoft .NET and Clarion integration. The talk was prepared together with C.I.C. Software GmbH and my company (CODEFUSION). The Clarion part is supervised by Andrzej Skolniak from C.I.C. and we will be giving the speech together. We will be talking about various interoperability solutions between .NET and Clarion tried out in one of the projects my company is co-developing with C.I.C. Software. The interoperability solution is based on unmanaged exports – modified .NET libraries that are accessible from Clarion. The managed methods are not exposed as such. Instead, inverse P/Invoke thunks, automatically created by the common language runtime, are exported. These thunks provide the same marshaling functions as “conventional” P/Invoke thunks, but in the opposite direction. Using this method we were able not only to connect a full-blown .NET based BPMN engine to Clarion, but also to inject Microsoft WPF based controls into Clarion-created windows. With this and a set of callback functions (for .NET to talk back to Clarion – to do the evaluates, for example), we built in .NET and C# a production-grade extension to Clarion based software.
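To give a rough idea of what such an unmanaged export looks like, here is a minimal sketch using Robert Giesecke’s UnmanagedExports NuGet package – one well-known way to get unmanaged exports, not necessarily the exact mechanism of our project – and the exported method itself is a made-up example:

using System.Runtime.InteropServices;
using RGiesecke.DllExport;

public static class ClarionBridge
{
    // Exported as a plain C-style function. The runtime emits an
    // inverse P/Invoke thunk for it, so a native caller like Clarion
    // can use it as if it came from an ordinary DLL.
    [DllExport("AddNumbers", CallingConvention = CallingConvention.StdCall)]
    public static int AddNumbers(int a, int b)
    {
        return a + b;
    }
}

On the Clarion side the compiled library is then declared and called like any other external DLL function.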

Join us at the CIDC 2015 between September 30th and October 2nd in Orlando, Florida, USA.

Developer Week 2015

February 17, 2015 on 12:30 pm | In DotNet, Windows | No Comments

For the third time in a row I will be speaking at Developer Week 2015 in Nuremberg, Germany. This year I will not do it solo. I’m going with CODEFUSION’s head developer Marcin Słowik and we will be speaking about creating professional-style user controls in WPF, like the guys at Telerik or Infragistics do it. Please join us between the 15th and 18th of June 2015 in Nuremberg!

Scaling CI – switching from poll to push

October 21, 2014 on 10:13 pm | In Continuous Integration, DotNet, SVN, TFS, Windows | No Comments

Scaling CI has many flavors. For example:

When:

  • Code base / test no. increases -> build time increases,
  • Teams grow,
  • No. of projects grows.

Then:

  • Create targeted builds (dev build, qa build),
  • Write fast unit tests,
  • Smaller teams with local integration servers,
  • Modularize the code base:
    • Scale hardware,
    • Add more build agents,
    • Parallelize.

and last but not least:

  • Ease the load on the source control system.

Let me show you how to make Subversion and (TFS) Git proactively inform Jenkins CI about changes in source control.

The most straightforward way to let the CI server know that something changed in the repository is to configure polling. What it means is that the CI server periodically asks the source control system: “Do you have changes for me?”. In Jenkins CI you configure it under “Build Triggers” and “Poll SCM”. Jenkins uses cron-style notation like this:

* * * * *

Five stars (“* * * * *”) means: poll every minute. Every minute is as close to continuous as you can get – more often is not possible. Most of the time it is not a problem; once a minute is quite enough. But what if you have many repositories under CI? A single Jenkins CI request does not cost much, but if there are many repositories to check, it can mean a significant delay.

There is a way to change it: switching from poll to push. How about letting the source control system inform the CI server “I have something new for you”? The mechanism that makes it possible is called hooks (at least it’s called hooks in Subversion and Git). Hooks are scripts that are executed in different situations: on the client before or after a commit (pre-commit, post-commit), before or after an update (pre-update, post-update) and so on, or on the server before or after a receive (pre-receive, post-receive). What is interesting for us is the post-commit hook in Subversion (look for the hooks subdirectory on the server) or post-receive in Git (look in .git\hooks). Because Git is distributed you have hooks in every repo, but the one that is interesting for us is of course the repo the CI server builds from, and from its point of view it is the post-receive hook that needs to be executed. In those hooks you can do basically everything you want. We will get back to it soon.
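As a taste, a Git post-receive hook that pushes a notification to Jenkins can be as small as this sketch (it assumes curl is available on the server and uses the trigger URL scheme described below):

#!/bin/sh
# Hypothetical post-receive hook: tell Jenkins that new commits arrived.
curl -s "http://[jenkins_server]/buildByToken/build?job=[job_name]&token=[token]"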

On the Jenkins CI side you need to change the trigger to “Trigger build remotely”. This option is only visible if your installation of Jenkins is not secured with a login and password.


In this case you can always trigger the build by simply calling the URL:

http://[jenkins_server]/job/[job_name]/build

If your installation is secured, you have to flag “Trigger build remotely” and you can set a security token for the build. Only with this token can the build be triggered.


The URL that needs to be called in this case is:

http://[jenkins_server]/job/[job_name]/build?token=[token]

If your Jenkins is viewable without authentication, it will be possible to trigger the build this way. But sometimes Jenkins CI will be secured in such a way that nothing is viewable without logging in. How to trigger a build in this case? Well, there is a plug-in for that. It is called “Build Authorization Token Root Plugin” and it is available under https://wiki.jenkins-ci.org/display/JENKINS/Build+Token+Root+Plugin. In this case the URL will be:

http://[jenkins_server]/buildByToken/build?job=[job_name]&token=[token]

We are ready on the Jenkins CI side. Let’s make things ready on the source control system side. Since we are Microsoft-minded at CODEFUSION (my company), we have Subversion on our own Windows server and Git on Microsoft’s visualstudio.com.

In Subversion, go to the server and look for the repositories. Go to the repository you want to trigger from and into its hooks subdirectory. Create a file called post-commit.cmd. Subversion will run this script every time something comes in. We want to simply call a URL. Under Linux you would use the curl command. You can do that here too, but you would have to download curl for Windows and place it somewhere on the server. There is a better way: you can use PowerShell to call the URL. So create a post-commit.ps1 file (the name does not actually matter, but let’s keep it “in Ordnung”). Inside, write the script:

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$url="https://[jenkins_server]/buildByToken/build?job=[job_name]&token=[token]"
(New-Object System.Net.WebClient).DownloadString("$url");

The first line is only needed if you have Jenkins running over SSL with a self-issued certificate (like we have). In the second line, please fill in the placeholders to form the correct URL. The third line calls this URL. The nice thing about it: if you are on a modern Windows server, you most likely have PowerShell installed already.

Now call the PowerShell script from the post-commit.cmd like this:

PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%~dp0post-commit.ps1'"

The NoProfile and ExecutionPolicy switches make it possible to call the script from the command line. In the Command switch pay attention to the syntax: %~dp0 expands to the directory the batch file resides in, so the .ps1 file is expected right next to the .cmd file.

Now check something in and watch the build being triggered (if it’s not, check everything once again – it worked on my machine).

Now Git. We were using TFS Git from visualstudio.com. There is no access to hooks under TFS, but Microsoft was kind enough to make it possible another way. Log into visualstudio.com, go to your project and look for “Service Hooks”.


It lets you integrate with various third-party services. One of them is Jenkins CI.


I would like Microsoft to let me make a simple URL call among those “Services”. Please. But since that is not possible, let’s choose Jenkins.


I decided to trigger the build after every code push. You can set the filters to get it triggered only for certain repos or branches. Then choose to trigger a generic build and provide all the necessary information: the Jenkins URL, the user name, the API token (more on it later), the build (the job name, provided automatically) and the build token (as in the case of SVN – provided by Jenkins when you configure “Trigger build remotely”). To get the API token on Jenkins CI go to “People”, search for the configured user and choose “Configure”.


Look for the API token and use it on visualstudio.com.

Test it and check if the build was triggered. It should be. It worked on my machines.

I hope it was useful!

Vanilla build server and a little NuGet gem

October 6, 2014 on 7:37 pm | In ASP.NET MVC, Continuous Integration, DotNet, MSBuild | No Comments

Vanilla build server is a concept that says the build server should have as few dependencies as possible. It should be like vanilla ice cream without any raisins (I hate raisins in ice cream). Let me cite the classic (from Continuous Integration in .NET):

“It’s strongly suggested that you dedicate a separate machine to act as the CI server. Why? Because a correctly created CI process should have as few dependencies as possible. This means your machine should be as vanilla as possible. For a .NET setup, it’s best to have only the operating system, the .NET framework, and probably the source control client. Some CI servers also need IIS or SharePoint Services to extend their functionality. We recommend that you not install any additional applications on the build server unless they’re taking part in the build process.”

I was recently preparing a talk for a conference and setting up a brand new CI server on Windows Server 2012. My ASP.NET MVC project build, of course, ended with the following error:

error MSB4019: The imported project "C:\Program Files 
(x86)\MSBuild\Microsoft\VisualStudio\v11.0\
WebApplications\Microsoft.WebApplication.targets" 
was not found. Confirm that the path in the <Import> 
declaration is correct, and that the file exists on disk.

Well, of course. I have a vanilla machine without any MSBuild targets for ASP.NET MVC. I was going to solve it as usual: create a tools directory, copy the needed targets into the repository and configure the MSBuild paths to take the targets provided with the repository. It worked like a charm in the past and it would have worked now. But something (call it intuition) made me check over at NuGet, and to my joy I found this little gem:

https://www.nuget.org/packages/MSBuild.Microsoft.VisualStudio.Web.targets/12.0.1

“MSBuild targets for Web and WebApplications that come with Visual Studio. Useful for build servers that do not have Visual Studio installed.” Exactly!

I quickly installed it and configured MSBuild on the build server to use it like this:

/p:VSToolsPath='..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.12.0.1\tools\VSToolsPath'

It is a command-line parameter I added to the build arguments.
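For illustration, a full MSBuild call with this parameter might look something like this (a sketch – the project name is made up and the relative path assumes the standard NuGet packages directory next to the solution):

msbuild MyWebApp.csproj /p:Configuration=Release "/p:VSToolsPath=..\packages\MSBuild.Microsoft.VisualStudio.Web.targets.12.0.1\tools\VSToolsPath"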

And voila!

.NET Developer Days 2014 Conference

September 17, 2014 on 10:13 am | In Continuous Integration, DotNet, Software Engineering | No Comments

I will be speaking at .NET Developer Days 2014 in Wrocław, Poland. The conference will be held between the 14th and 16th of October 2014 at the City Stadium in Wrocław. The topic is “Continuous integration and deployment in .NET with Jenkins CI and Octopus Deploy”. Here is the conference website: http://developerdays.pl/.

WCF services behind NAT problem and solution

August 28, 2014 on 12:02 pm | In ASP.NET MVC, DotNet | No Comments

Problem: we have a set of WCF services working on a server, and an ordinary ASP.NET page that calls one of the services to display its state. When we call that page, it is supposed to look like this: a green page indicating that everything works fine.

At a customer’s site we installed the services, and instead of the green page we got an error.

The message was: There was no endpoint listening at http://…/Services/BasicDataService.svc/DeliverServiceState that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

We quickly discovered that there was nothing wrong with the services. Our app worked fine and we were able to call the “.svc” endpoint in the browser.

But why do we get this exception?

A quick call to the service on the server revealed no response (“This page can’t be displayed”). A ping to the domain revealed “Request timed out.”. Oh, we are getting nearer. Our status page calls the services from inside the customer’s network (both the website and the services are in the same site on IIS). It looks like we are behind NAT. Requests coming to the company router from outside are correctly routed to the server working inside the company network: the domain name is translated with the help of DNS to the global IP address, and the company router routes the communication to the server, which works with its local IP address. But if we are connecting from the inside, we are hitting the wrong side of the router, and it is not able to translate the global IP to the local IP correctly. We are not landing where we are supposed to.

A reconfiguration of the company router should do the trick (NAT loopback). We asked the company admin to make the change and proceeded with a quick workaround: we changed the hosts file to fix it right away.

Hosts is a text file located in C:\Windows\System32\drivers\etc, used by the system to resolve host names to IP addresses locally. We can add the server’s local IP address and match it with the domain name.
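For example, an entry like this (the IP address and host name here are made up) makes every local request for the domain land straight on the server’s local address:

192.168.0.10    services.example.com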

That solved the problem. But soon we got another one.

We added a second binding to the site and our information page went red once again. This one was actually easy to solve. We called the “.svc” service directly and got:

This collection already contains an address with scheme http.  There can be at most one address per scheme in this collection. If your service is being hosted in IIS you can fix the problem by setting ‘system.serviceModel/serviceHostingEnvironment/multipleSiteBindingsEnabled’ to true or specifying ‘system.serviceModel/serviceHostingEnvironment/baseAddressPrefixFilters’.
Parameter name: item

Yep, multiple bindings were present, but WCF was configured to work with only one. We had to change:

<system.serviceModel>
   <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

to

<system.serviceModel>
   <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />

in the services’ web.config.

We added the second address to the hosts file and voila!

Pictures from DWX’14 conference

July 23, 2014 on 8:33 am | In DotNet, Netduino, Software Engineering, Tinkerforge | No Comments

This year I was once again an invited speaker at the Developer Week conference in Nuremberg, Germany. I was speaking (in German, of course) about the basics of hardware programming in .NET. Developer Week is the biggest developer conference in Germany: 250 sessions, 150 speakers. It consists of the .NET Developer Conference (DDC), the Web Developer Conference (WDC) and the Mobile Developer Conference (MDC). Here are some pictures from the conference.

Waiting for the first .NET wrist watch

May 9, 2014 on 2:26 pm | In Continuous Integration, DotNet, Netduino | No Comments

Almost a year ago there was a Kickstarter campaign to fund the first .NET Micro Framework watch: the Agent smartwatch. The nice thing about it is that you will be able to program it using C# and Visual Studio. While we are still waiting for the product, there is an SDK with an emulator. It is from the same guys that gave us Netduino!

Think about it: you have a Continuous Integration server running your builds and you want to monitor it on the fly. Is there a better device to do it than a wrist watch? So I thought, and decided to check it out.

Here is a quick project I’ve hacked together to prove the concept. But before we begin, let me show you the result:

[Screenshot: the Agent emulator showing the Jenkins build status]

Neat, isn’t it?

I’m using Jenkins as my Continuous Integration server. It has a set of APIs for the developer to use. I decided to give the JSON API a try.

I typed:

http://jenkins_url/api/json?tree=jobs[name,lastBuild[building,result]]

which gave me a nice JSON result:

{
  "jobs": [
    {
      "name": "Demo4Dev1",
      "lastBuild": { "building": false, "result": "SUCCESS" }
    },
    {
      "name": "Demo4Dev2",
      "lastBuild": { "building": false, "result": "SUCCESS" }
    },
    {
      "name": "DemoTest1",
      "lastBuild": { "building": false, "result": "SUCCESS" }
    }
  ]
}

I went to the Agent website and got the SDK. I fired up Visual Studio and went New Project –> Visual C# –> Micro Framework –> AGENT Watch Application, which gave me a Hello World application.

I added System.Http and System.IO references and headed straight to getting the HTTP response and reading the response stream to the end, like this:

// Request the Jenkins JSON API and read the whole response into a string
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(JenkinsApiUrl);
WebResponse resp = req.GetResponse();
StreamReader sr = new StreamReader(resp.GetResponseStream());
string respStr = sr.ReadToEnd();

Now I needed something to parse the JSON text. Luckily for me, I wasn’t the only one: there is a nice NuGet package with a JSON parser. To get it, issue:

PM> Install-Package Json.NetMF

Having it, I headed straight to deserialization:

Hashtable deserializedObject = Json.NETMF.JsonSerializer.DeserializeString(respStr) as Hashtable;

Then I went to hack and slash over the result to find out if everything was all right.

// Assume success
bool generalFailure = false;

// Walk the structure: { "jobs": [ { "name": ..., "lastBuild": { "building": ..., "result": ... } } ] }
foreach (DictionaryEntry de in deserializedObject)
{
    // de.Value is the array of jobs
    foreach (Hashtable ht in de.Value as ArrayList)
    {
        foreach (DictionaryEntry job in ht)
        {
            // Skip the "name" entry; "lastBuild" holds the nested hashtable
            if (!job.Key.ToString().Equals("name"))
            {
                Hashtable ht2 = job.Value as Hashtable;
                if (ht2 == null) continue;
                foreach (DictionaryEntry results in ht2)
                {
                    // Ignore the "building" flag; flag any "result" that reports a failure
                    if (!results.Key.ToString().Equals("building"))
                    {
                        if (results.Value.ToString().Equals("FAILURE"))
                            generalFailure = true;
                    }
                }
            }
        }
    }
}

I added two result images (a sun for success and a storm for failure) to the resources.


And headed off to show the result:

// initialize display buffer
_display = new Bitmap(Bitmap.MaxWidth, Bitmap.MaxHeight);

// Show result
_display.Clear();
Font fontNinaB = Resources.GetFont(Resources.FontResources.NinaB);

_display.DrawText("Jenkins", fontNinaB, Color.White, 35, 10);
if (generalFailure)
{
    _display.DrawText("FAIL!", fontNinaB, Color.White, 35, _display.Height - 20);
    Bitmap image =
        new Bitmap(Resources.GetBytes(Resources.BinaryResources.storm), Bitmap.BitmapImageType.Bmp);
    _display.DrawImage(_display.Width / 2 - image.Width / 2,
        _display.Height / 2 - image.Height / 2,
        image, 0, 0, image.Width, image.Height);

}
else
{
    _display.DrawText("SUCCESS!", fontNinaB, Color.White, 35, _display.Height - 20);
    Bitmap image =
        new Bitmap(Resources.GetBytes(Resources.BinaryResources.sun), Bitmap.BitmapImageType.Bmp);
    _display.DrawImage(_display.Width / 2 - image.Width / 2,
        _display.Height / 2 - image.Height / 2,
        image, 0, 0, image.Width, image.Height);
}
_display.Flush();

I packed everything in a never-ending while loop with a small delay:
while (true)
{
  // ... code ...
  Thread.Sleep(10000);
}

Done!

And that’s the screen with the failure notice:

[Screenshot: the emulator screen showing “FAIL!” with the storm icon]

I can’t wait to get the Agent Watch to make the final app!

Hardware programming in .NET at DWX 2014

April 18, 2014 on 2:12 pm | In DotNet, Netduino, Tinkerforge | No Comments

Once again I was invited to give a talk at the DWX – Developer Week in Nuremberg, Germany. Last year I was speaking about “Continuous Integration in .NET”. This year it is time to give “Hardware programming in .NET” a try. I will show how to create software for Netduino, Tinkerforge and Raspberry Pi using the .NET Micro Framework, the .NET Framework and Mono. Oh, and I’m planning to build the circuits during the talk! It should be a lot of fun. And here is a small example of an RGB LED attached to a Raspberry Pi and programmed in Mono.

 

CODEFUSION’s Illuminated Raspberry Pi

 

To get Mono onto your Raspberry Pi, issue the following commands:

sudo apt-get update

sudo apt-get install mono-runtime

sudo apt-get install mono-mcs

And here is the source code for the blinking LED:

using RaspberryPiDotNet;

namespace HelloRaspberryPi
{
    class Program
    {
        static void Main(string[] args)
        {
            GPIOMem ledRed = new GPIOMem(GPIOPins.Pin_P1_22);   // GPIO25
            GPIOMem ledGreen = new GPIOMem(GPIOPins.Pin_P1_18); // GPIO24
            GPIOMem ledBlue = new GPIOMem(GPIOPins.Pin_P1_16);  // GPIO23

            while (true)
            {
                ledRed.Write(true);
                System.Threading.Thread.Sleep(1500);
                ledRed.Write(false);
                ledGreen.Write(true);
                System.Threading.Thread.Sleep(1500);
                ledGreen.Write(false);
                ledBlue.Write(true);
                System.Threading.Thread.Sleep(1500);
                ledBlue.Write(false);
            }
        }
    }
}

The prototype is shown in the picture below.

[Photo: the RGB LED prototype connected to the Raspberry Pi]

The diagram below shows the Raspberry Pi pins used. For a description and the matching of pin names to numbers see http://elinux.org/RPi_Low-level_peripherals. I have used GPIOPins.Pin_P1_22 = GPIO25 for red, GPIOPins.Pin_P1_18 = GPIO24 for green and GPIOPins.Pin_P1_16 = GPIO23 for blue, plus P1-20 for the grounding. I used three 220 Ohm resistors connected to the color (+) leads.

[Diagram: Raspberry Pi P1 header pinout]

To get it running you will have to reference and use RaspberryPiDotNet.dll from https://github.com/cypherkey/RaspberryPi.Net/ and have the BCM2835 library compiled. You can get and install it using the following commands:

wget http://www.airspayce.com/mikem/bcm2835/bcm2835-1.3.tar.gz

tar -zxvf bcm2835-1.3.tar.gz

cd bcm2835-1.3

./configure

make

sudo make install

cd src

cc -shared bcm2835.o -o libbcm2835.so

Then compile your program using Mono’s C# compiler:

mcs ./Program.cs -r:./RaspberryPiDotNet.dll

and run it:

sudo mono ./Program.exe

Voila!

If you happen to attend the Developer Week / .NET Developer Conference between the 14th and 17th of July 2014, please drop by to see my talk!

PS. My Raspberry Pi is enclosed in a self-printed case (printed on our 3D printer) with my company’s CODEFUSION logo.
