Performance decrease since pre-production samples?

I have now finished reading the book Intel Xeon Phi Coprocessor High-Performance Programming by Jim Jeffers and James Reinders. I hoped to find the reason why I am not able to reach good performance results on the Xeon Phi, but I am still a bit confused about that. So I decided to test the two example programs from the book and to check whether the given results are comparable to the performance of our Xeon Phi. Since the source code and the output are printed completely in the book, the calculation times should be nearly the same, because our model of the Phi also has 61 cores like the pre-production sample in the book. So let's have a look at the two programs.

9-point stencil algorithm
This small program applies a blur filter to a given image represented as a 2D array. The influence of all 8 neighboring points on a center point is taken into account, so for each point a weighted sum of 9 addends must be calculated. Since there are two image buffers which are swapped at the end of each iteration, every pixel can be calculated independently of the others. That's why a simple parallelization can be realized by:

#pragma omp parallel for private(x)
for(y=1; y < HEIGHT-1; y++) {
    for(x=1; x < WIDTH-1; x++) {
        ....
    }
}

To help the compiler with the vectorization it is only necessary to add a #pragma ivdep, so that the compiler vectorizes the inner loop.

#pragma omp parallel for private(x)
for(y=1; y < HEIGHT-1; y++) {
    #pragma ivdep
    for(x=1; x < WIDTH-1; x++) {
        ....
    }
}
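The loop bodies are elided in the listings above. A minimal sketch of what the blurring kernel plausibly looks like is shown below; the buffer names img_in and img_out and the 3x3 weight array w are my own placeholders, not the book's exact identifiers.

/* Sketch of the elided 9-point stencil body (assumed names, not the book's code). */
#pragma omp parallel for private(x)
for (y = 1; y < HEIGHT - 1; y++) {
    #pragma ivdep
    for (x = 1; x < WIDTH - 1; x++) {
        img_out[y][x] =
            w[0][0] * img_in[y-1][x-1] + w[0][1] * img_in[y-1][x] + w[0][2] * img_in[y-1][x+1] +
            w[1][0] * img_in[y  ][x-1] + w[1][1] * img_in[y  ][x] + w[1][2] * img_in[y  ][x+1] +
            w[2][0] * img_in[y+1][x-1] + w[2][1] * img_in[y+1][x] + w[2][2] * img_in[y+1][x+1];
    }
}

At the end of each iteration the roles of img_in and img_out are swapped, which is what makes all pixels independent within one iteration.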

After these code changes the authors reach the following execution times on the Xeon Phi:

  • 122 threads:  8.772s
  • 244 threads: 12.696s

The program compiled here with the same flags and the same setup on our Phi (scatter scheduling) leads to:

  • 122 threads: 12.664s
  • 244 threads: 19.998s
  • (240 threads: 17.181s)

So in the case of 122 threads our Phi needs 44% more time to finish its work; in the case of 244 threads the increase is even 57%! The special behaviour when using the maximum number of threads will be investigated below. But even with 240 threads our Phi is much slower than the reference in the book (35% difference).

Diffusion
Here a program is examined which simulates the diffusion of a solute through a volume of liquid over time. This happens in 3D space. The calculation is very similar to the image filter example from above, with the main difference that a 3D array is used now. Here you take six neighboring grid cells into account (above, below, in front, behind, left and right), so for every entry you have a weighted sum with seven addends. After optimizing for scaling and vectorization your code looks like:

#pragma omp parallel
{
    ....
    #pragma omp for collapse(2)
    for(z=0; z < nz; z++) {
        for(y=0; y < ny; y++) {
            #pragma ivdep
            for(x=0; x < nx; x++) {
                ....
            }
        }
    }
}
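The elided parts are the computation of the flat array index and the weighted sum over the seven cells. A sketch of the filled-in loop nest, assuming two flat float arrays f1 (input) and f2 (output) and diffusion coefficients cc, cw, ce, cn, cs, cb, ct (names chosen here, not necessarily the book's), could look like this:

/* Sketch of the 7-point diffusion kernel with assumed names. */
#pragma omp parallel
{
    #pragma omp for collapse(2)
    for (z = 0; z < nz; z++) {
        for (y = 0; y < ny; y++) {
            #pragma ivdep
            for (x = 0; x < nx; x++) {
                int c = x + y * nx + z * nx * ny;      /* center cell    */
                int w = (x == 0)      ? c : c - 1;     /* west neighbor  */
                int e = (x == nx - 1) ? c : c + 1;     /* east neighbor  */
                int n = (y == 0)      ? c : c - nx;    /* north neighbor */
                int s = (y == ny - 1) ? c : c + nx;    /* south neighbor */
                int b = (z == 0)      ? c : c - nx*ny; /* cell below     */
                int t = (z == nz - 1) ? c : c + nx*ny; /* cell above     */
                f2[c] = cc * f1[c] + cw * f1[w] + ce * f1[e]
                      + cn * f1[n] + cs * f1[s] + cb * f1[b] + ct * f1[t];
            }
        }
    }
}

At the domain boundaries the centre value is simply reused, and the two arrays are swapped after every time step.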

The results in the book are:

  • 122 threads: 25.369s
  • 244 threads: 18.664s

With our Phi I am able to achieve the following times:

  • 122 threads: 22.661s
  • 244 threads: 29.849s
  • (240 threads: 20.419s)

For me it was very strange to notice that the execution times, especially for 240 threads, show a big variability. The fastest run out of 10 finished after 20.419s and the slowest one needed 31.580s, although I was the only user on the Phi. In contrast, for 122 threads the fastest execution finished after 22.661s and the slowest one after 23.796s. For 244 threads the behaviour of the Phi is again completely different from the result in the book. And if one looks at the output of the Phi's monitoring software, one can see the reason for it:

[Screenshots of the Phi's monitoring software: core utilization with 240 threads and with 244 threads]

So the average core utilization decreases dramatically if you follow the recommendation of the book to use all available cores in native mode and all cores minus one when running in offload mode. Perhaps a change in the software leads to this behaviour? I also measured the fastest execution time on the two server processors on the board (which is not done in the book). With 16 threads they needed 30.900 seconds and therefore took "only" 50% more time than the Xeon Phi. And this in an application which should be capable of using all the compute power the Xeon Phi offers.

Summary
Strange. That's all I have in mind when I think about this situation. I am using the same code as the book, the same compiler flags and a Phi product with the same feature set as the pre-production sample in the book. I'm running the code in native mode, so the driver, the MPSS and so on can't have an impact on the performance. The only things I can see that differ from the book are the Linux version on the Phi (the latest available one) and the newer versions of the Intel compiler and OpenMP library. But can this cause such big performance differences?

First real life experiment with the Xeon Phi IV

In this part I will present some results which I got from using OpenCL on the Xeon Phi. In another blog entry I described some problems when using the Xeon Phi together with OpenCL [1]. These problems are solved now, and I will first report how I achieved this. After that I will summarize the performance measurements and their results.

Getting OpenCL running
My problem was not installing OpenCL but initializing it correctly. So everything I explained in [1] was correct and the installation was complete. First I used an example code from the Internet to make sure that the Phi is really registered as an OpenCL device [2]. This gave me the following output:

platform count: 1
device count: 2
1. Device: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
1.1 Hardware version: OpenCL 1.2 (Build 56860)
1.2 Software version: 1.2
1.3 OpenCL C version: OpenCL C 1.2
1.4 Parallel compute units: 32
2. Device: Intel(R) Many Integrated Core Acceleration Card
2.1 Hardware version: OpenCL 1.2
2.2 Software version: 1.2
2.3 OpenCL C version: OpenCL C 1.2 (Build 56860)
2.4 Parallel compute units: 236
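The example from [2] essentially enumerates all platforms and devices and queries a few attributes via clGetDeviceInfo. A minimal, self-contained sketch of that kind of enumeration (not the exact code from [2], error handling omitted) looks roughly like this:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint platformCount, deviceCount, units;
    cl_platform_id platform;
    cl_device_id devices[8];
    char name[256], version[64];

    clGetPlatformIDs(1, &platform, &platformCount);
    printf("platform count: %u\n", platformCount);

    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &deviceCount);
    printf("device count: %u\n", deviceCount);

    for (cl_uint i = 0; i < deviceCount; i++) {
        /* query name, supported OpenCL version and number of compute units */
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_VERSION, sizeof(version), version, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(units), &units, NULL);
        printf("%u. Device: %s\n", i + 1, name);
        printf("   Hardware version: %s\n", version);
        printf("   Parallel compute units: %u\n", units);
    }
    return 0;
}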

At this point I could be sure that the Phi is ready to work, but the next question was how to use it. After a while of reading the OpenCL documentation I got a hint about the device type CL_DEVICE_TYPE_ACCELERATOR, and with that I was able to do calculations on the Phi. The following code shows how I initialize my device (declarations and error handling omitted).

/* cprops selects the OpenCL platform via CL_CONTEXT_PLATFORM (declaration omitted) */
context = clCreateContextFromType(cprops,
    CL_DEVICE_TYPE_ACCELERATOR,
    NULL,
    NULL,
    &status);

/* first query only the size of the device list ... */
status = clGetContextInfo(context,
    CL_CONTEXT_DEVICES,
    0,
    NULL,
    &deviceListSize);

devices = (cl_device_id *)malloc(deviceListSize);

/* ... then fetch the actual device IDs into the allocated buffer */
status = clGetContextInfo(context,
    CL_CONTEXT_DEVICES,
    deviceListSize,
    devices,
    NULL);

commandQueue = clCreateCommandQueue(
    context,
    devices[0],
    CL_QUEUE_PROFILING_ENABLE,
    &status);

Now with devices[0] I can do the rest of the initialization work.
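The rest of the initialization then follows the usual OpenCL pattern: build the program for the accelerator device, create the kernel and the output buffer. A minimal sketch of this (the kernel name "render", the variable source and the buffer layout are assumptions, not the raytracer's real interface):

/* source holds the OpenCL C code of the raytracing kernel (loading omitted) */
program = clCreateProgramWithSource(context, 1, &source, NULL, &status);
status  = clBuildProgram(program, 1, &devices[0], NULL, NULL, NULL);
kernel  = clCreateKernel(program, "render", &status);

/* output buffer for the rendered image, here assumed as 1024*1024 RGBA floats */
imgBuffer = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
                           1024 * 1024 * 4 * sizeof(float), NULL, &status);
status = clSetKernelArg(kernel, 0, sizeof(cl_mem), &imgBuffer);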

OpenCL Results
For the comparison between the installed Intel Xeon E5-2670 and the Xeon Phi I used a raytracer which operates within Geometric Algebra (GA). It was developed in my Master's thesis [3] and modified and ported to Linux for this test. I will show the test scenes and present the results in the following. The profiling was done as in my thesis, by using the OpenCL framework's profiling events [4]. The resolution for every scene is 1024*1024.
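Since the command queue above was created with CL_QUEUE_PROFILING_ENABLE, the kernel runtime can be read from the event returned by clEnqueueNDRangeKernel. A minimal sketch of this measurement (variable names are placeholders):

cl_event evt;
cl_ulong t_start, t_end;

/* enqueue the raytracing kernel and keep the associated event */
status = clEnqueueNDRangeKernel(commandQueue, kernel, 2, NULL,
                                globalWorkSize, NULL, 0, NULL, &evt);
clWaitForEvents(1, &evt);

/* both timestamps are device counters in nanoseconds */
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                        sizeof(t_start), &t_start, NULL);
clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                        sizeof(t_end), &t_end, NULL);

printf("kernel time: %f s\n", (t_end - t_start) * 1e-9);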

Raptor
[Rendered image: Raptor]
This small dinosaur consists of 100000 triangles. The model was only raycasted, so no reflection rays were used. The Xeon Phi needed 2.98 seconds to render this image, the Xeon E5 only 2.58 seconds.

3 Bunnies and an Elephant
[Rendered image: 3 bunnies and an elephant]
This scene consists of 16150 triangles (each bunny 4968, the elephant the rest). The Xeon E5 needs 2.47 seconds and the Xeon Phi 2.17. So at least here the Phi can outperform the E5.

CowSphere
[Rendered image: CowSphere]
This rather small scene consists of only about 6000 triangles, but the calculation is dominated by the high amount of shadow. It was tested in two ways: first with bounding spheres to reduce the number of ray-triangle intersection tests (like all scenes until now), and in a second step without bounding volumes. In that case, for every pixel (i.e. the corresponding eye ray) a test against every triangle in the scene has to be done. For the first variant the Phi renders the picture within 2.11 seconds, while the E5 is done after 2.96 seconds. For the second, the Phi needs 4.76 seconds and the E5 6.46 seconds.
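A bounding sphere test of this kind is essentially just a discriminant check of the quadratic equation for the ray-sphere intersection; only if the ray hits the sphere are the enclosed triangles tested. A generic sketch of such a test (not the GA formulation used in my raytracer, which works differently):

/* Returns 1 if the ray o + t*d (d normalized) hits a sphere with center c
 * and radius r; only then are the triangles inside the bounding volume
 * tested. A full test would additionally require t >= 0. */
int hits_bounding_sphere(const float o[3], const float d[3],
                         const float c[3], float r)
{
    float oc[3] = { o[0] - c[0], o[1] - c[1], o[2] - c[2] };
    float b     = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];
    float cterm = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    return b*b - cterm >= 0.0f;   /* discriminant of the quadratic in t */
}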

Kittens
[Rendered image: Kittens]
This last scene is the most complex one. Each kitten is built from 137098 triangles and the bounding spheres are disabled, while the scene is not only raycasted but completely raytraced. So for each of the 1048576 pixels more than 250000 triangles must be tested for intersection (in case of a hit this number doubles). The Phi finishes this task after 142 seconds, while the Xeon E5-2670 has the result after 177 seconds.

Summary
It looks like it did in the other three parts of this series before: without changing or rewriting existing code it seems impossible to exploit the Xeon Phi's potential. In contrast to my attempts in offload and native mode with C++ code, the Phi is able to render faster than the server processor in most of the scenes, but its lead is not that big. Especially if I take the results on my AMD HD6970 from my Master's thesis into account, the calculation of the scenes on the Xeon Phi is slow.

Sources:
[1] http://www.theismus.de/HPCBlog/?p=81
[2] http://dhruba.name/2012/08/14/opencl-cookbook-listing-all-devices-and-their-critical-attributes/
[3] http://www.gaalop.de/wp-content/uploads/Masterarbeit-Michael-Burger.pdf
[4] http://software.intel.com/sites/landingpage/opencl/optimization-guide/Profiling_Operations_Using_OpenCL_Profiling_Events.htm

First real life experiment with the Xeon Phi III

After I reported my experiences with the native mode of the Phi, I have now done my first steps in offload mode. I used the same raytracer as in the first two parts of this article series. A few pitfalls revealed themselves during my tries to get the application running in offload mode.

Changes needed in the raytracer
First of all I had to modify the code so that it compiles with the offload pragmas. There I noticed some difficulties, especially for C++ code.

The first issue is that the Phi has to get to know the used classes. For the raytracing procedure and the existing code this affects all classes of the project. To calculate the colour of a pixel the Phi must know the scene and its objects, in this implementation called Shapes. Additionally it needs to know what the Color class is, which it should produce as output. Furthermore Lights, Rays, Vectors and Points are required. So I had to tell the compiler for all of these classes that it has to offload them. This is done by surrounding the class definitions with the offload attribute pragma:

#pragma offload_attribute (push,target(mic))
// includes

class Color
{
  public:
  …

  private:
  …
};
#pragma offload_attribute (pop)

These changes had to be applied to all header files, so you need more than just one pragma to offload code to the Phi.

Another problem was the fact that my image array for the calculated picture was declared as img[HEIGHT][WIDTH][3]. I did some small experiments with multi-dimensional arrays and tried to fill them on the Phi, but this resulted in crashes during execution. I don't know if this was my error or if the Phi (or the compiler) isn't capable of dealing with such constructs. So I had to change the code so that it uses a 1D array now. Offloading and filling this array with test data was no problem.
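The flattening itself is only an index transformation. Assuming a row-major layout with three colour channels per pixel and unsigned char pixel values (the actual element type may differ), the former img[y][x][c] access plausibly becomes:

/* Hypothetical flattening of img[HEIGHT][WIDTH][3] into a 1D buffer. */
unsigned char *img1d = (unsigned char *)malloc(HEIGHT * WIDTH * 3);

/* former: img[y][x][c] */
img1d[(y * WIDTH + x) * 3 + c] = value;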

After this step I took the actual rendering loop and offloaded it to the Phi with:

#pragma offload target (mic) in(argc) \
    out(img1d : length(HEIGHT*WIDTH*3))

The last problem was the biggest one. After offloading the class structure and changing the structure of the output image, the code could be compiled. Trying to execute it ended up with a crash, and the Phi returned:

offload error: process on the device 0 was terminated by signal 11

Since I don't know how to debug the Phi at the moment, I located the problem by commenting out code and uncommenting it step by step. The reason for the crashes was very evident when I thought about it afterwards: the class structure and its functions are copied to the Phi, but not all of their members. Simple ints and doubles are copied automatically, but the list of the shapes was empty. I searched the Internet for a long time for an easy way to copy the whole instance of a class to the Phi, but I wasn't successful. A look into one of Intel's own examples destroyed my hope altogether. You can find it in the samples directory of Intel's 2013 version of the Composer.

There you can find an example of offloading a struct to the Phi, commented with:

// The first version of the offload implementation does not support copying
// of classes/structs that are not simply bit-wise copyable
//
// Sometimes it is necessary to get a struct across
//
// This needs to be done by transporting the members individually
// and reconstructing the struct on the other side

So this means for me: I would have to decompose the whole Shapes class and its inheritors into simple arrays or single variables, copy them separately and reassemble them on the Phi. I rejected this approach because of the amount of work. So I used a second method: I enlarged the code region within the offload pragma so that it additionally includes the creation of the scene. The scene is therefore instantiated by a single Xeon Phi core and placed directly in the Phi's RAM. The results are presented in the next section.

A last problem I was engaged with was writing the resulting image to a file. Since the array was a member of the Raytracer class and this class was instantiated on the Phi directly, it was not possible to access this data after the offload region. But this has to be done so that the stream can be written to the hard disk of my host system. So I had to allocate the output array first, pass it within the offload pragma as an out parameter and internally copy / link it to the member variable of the raytracer inside the offloaded code. Then, after the region, I write the stream to a file.
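Putting these two workarounds together, the structure of the offload region roughly looks like the following sketch; Raytracer, createScene, render, getImage and writeImageToFile are placeholder names for illustration, not necessarily the identifiers of the real code.

/* img1d is allocated on the host so it can be written to disk afterwards */
unsigned char *img1d = (unsigned char *)malloc(HEIGHT * WIDTH * 3);

#pragma offload target(mic) in(argc) \
    out(img1d : length(HEIGHT*WIDTH*3))
{
    /* scene and raytracer are created directly on the Phi, so no class
     * instances have to be bitwise-copied across the PCIe bus */
    Raytracer rt;
    rt.createScene();
    rt.render();

    /* link the member buffer to the out parameter of the pragma */
    memcpy(img1d, rt.getImage(), HEIGHT * WIDTH * 3);
}

/* back on the host: write img1d to a file */
writeImageToFile("out.ppm", img1d, WIDTH, HEIGHT);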

Results
Because of the code changes these results are not comparable to the older ones from parts I and II. But again only the time for the actual rendering loop is measured. I also changed the scene a little bit. The achieved times are more than disappointing.

Xeons on Host:
1 Thread: 46.238833 sec
2 Threads: 23.850293 sec
4 Threads: 12.371241 sec
8 Threads: 6.942405 sec
16 Threads: 4.752595 sec
32 Threads: 3.586519 sec

Xeon Phi:
30 Threads: 34.608027 sec
40 Threads: 27.258293 sec
60 Threads: 24.582100 sec
120 Threads: 18.004286 sec
240 Threads: 15.859062 sec

Xeon Phi (native)
30 Threads: 29.427415 sec
40 Threads: 22.920789 sec
60 Threads: 21.599124 sec
120 Threads: 14.557700 sec
240 Threads: 13.837122 sec

The native version is slightly faster than the offloaded one, and both are much slower than the run on the host. The new scene can be seen in the following picture:

[Rendered image of the new scene]

In a next step I will try to find better solutions for debugging the Phi than commenting and uncommenting code. I will test the Eclipse plugin which is shipped with Intel's MPSS package.

Sources:
http://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/index.htm

First real life experiment with the Xeon Phi II

In the previous part of this article [1] I mentioned that the next step would be an analysis of the bad scalability, and thus performance, of the raytracer on the Xeon Phi. As a first step I used the Intel VTune Amplifier to search for hotspots in the code. There seem to be no abnormalities in the execution flow, but it is conspicuous that the Amplifier states that the CPU time in the running threads is rather low. On the other side, the overall summary of the results looks pretty good:

[VTune Amplifier screenshots: simultaneously running threads and simultaneously utilized CPUs]

So I decided to use the Intel Inspector next. The normal analysis reported no errors. After that I increased the search depth and analysis scope. Since the analysis wasn't finished after seven minutes, I terminated it. I got two data race errors: one within the shading and one within the rendering function. So I disabled the shading and tried to eliminate the data race in the rendering by changing my OpenMP clause. That's why the resulting image looks like this:

[Rendered image without shading]

The results on the dual-socket Xeon server processor system are:

  • 1 Thread: 35.769255 sec
  • 2 Threads: 18.427898 sec
  • 4 Threads: 10.145121 sec
  • 8 Threads: 6.403982 sec
  • 16 Threads: 3.907015 sec
  • 32 Threads: 3.667761 sec

The corresponding values for the Phi are the following:

  • 1 Thread: 573.422251 sec
  • 2 Threads: 288.445928 sec
  • 4 Threads: 156.622805 sec
  • 8 Threads: 98.961222 sec
  • 15 Threads: 54.404671 sec
  • 30 Threads: 34.617849 sec
  • 60 Threads: 22.450361 sec
  • 120 Threads: 15.535183 sec
  • 240 Threads: 10.441986 sec

You can see that the overall performance increases in comparison with the original version with shading, but the scaling problem remains the same. So the Phi still isn't able to outperform the dual-socket cluster node, although the shading is clearly not the reason for the bad scaling or the generally low performance on the Phi. The search goes on…

Sources:
[1] http://www.theismus.de/HPCBlog/?p=17