Open Systems, 2013, Volume 21, Number 8

COVER FEATURES
HIGH PERFORMANCE COMPUTING: ON THE VERGE OF EXAFLOPS

From Data to Knowledge
Leonid Chernyak
Traditional supercomputers, however powerful, are still relics of the past century, when the computational paradigm of science was first introduced. Today, a need is emerging for computers of a different type, capable of turning data directly into knowledge.

The Promising Features of Tianhe-2
Dmitriy Andryushin, Viktor Gorbunov, Leonid Eysimont
The current supercomputing performance record holders, Tianhe-2 and Tianhe-1A, can rightfully be considered forerunners of future exascale systems. The main feature of these Chinese supercomputers is their hybrid compute nodes. Analyzing Tianhe's architecture can help understand the strengths and weaknesses of the hybrid approach and predict the future direction of the worldwide HPC industry.

Exascale Systems Programming
Leo Gervich, Boris Steinberg, Mikhail Yurushkin
The requirements for exascale system software are usually inferred from the assumption that such supercomputers will have complicated interprocessor communication schemes and memory organization. What ways of speeding up execution are available today for software running on a multithreaded supercomputer with a complex memory hierarchy?

PLATFORMS
Heterogeneous System Architecture: CPU/GPU/DSP and More
Timour Paltashev, Ilya Perminov
All modern PCs and mobile devices combine CPU and GPU cores, each designed to run its own kinds of tasks very well. But is there a way to make the CPU and GPU work together more efficiently, to accelerate mainstream parallel applications such as face detection and rigid-body physics? Is there a way to achieve better performance and save power while still supporting existing programming models? The Heterogeneous System Architecture (HSA) aims to do just that.

IT MANAGEMENT
Importance of Roles in Business Process Model
Igor Fyodorov
Analysts usually pay close attention to defining the workflow but often fail to describe in sufficient detail the interplay of the people doing the work. Yet it is the distribution of work among business process participants that largely determines a process's success.

BPM for All
Boris Zinchenko, Heinz-Jurgen Scherer
The more of an organization's employees are engaged in business process modeling, the better. Not everyone has access to specialized BPM systems, but this is not always necessary: in some cases, the features offered by productivity applications such as Visio or SharePoint turn out to be no worse than those of professional BPM tools.

CLOUD COMPUTING
Social-Network-Sourced Big Data Analytics
Wei Tan, M. Brian Blake, Iman Saleh, Schahram Dustdar
Very large datasets, also known as big data, originate from many domains. Deriving knowledge is more difficult than ever when we must do it by intricately processing this big data. Leveraging the social network paradigm could enable a level of collaboration to help solve big data processing challenges. Here, the authors explore using personal ad hoc clouds comprising individuals in social networks to address such challenges.

SOFTWARE ENGINEERING
PaaS: New Opportunities for Cloud Application Development
Beth Cohen
With cloud technology, platform as a service (PaaS) offers enterprise customers many new possibilities for application development.

APPLICATIONS
Using GPUs in Machine Learning Tasks
Igor Kuralyonok, Aleksander Shchekalev
While a powerful instrument for processing large volumes of data, machine learning requires striking a balance between the quality of the resulting models and the time it takes to compute them. Meanwhile, graphics accelerators make parallel execution possible for many algorithms, as is the case with the self-learning subsystem for search results ranking developed by Yandex.

EXTREME TECHNOLOGY
The Second Birth of Grid
Leonid Chernyak
For a long time, the term "grid computing" has referred either to pooled volunteer compute resources or to a collaborative networked environment used for scientific calculations. Today, however, the term is increasingly taking on a new meaning: integrating the memory resources of multiple clustered servers into a unified distributed infrastructure to speed up the real-time processing of data residing in large-volume memory structures.

OPINION
Multicore Dead End: There Is a Way Out
Vyacheslav Lyubchenko
Today's perception of parallel programming is as far from real parallel programming as summer is from winter: only someone who can tell the two seasons apart by nothing but calendar dates would fail to notice the difference.

OS GUESTROOM
Conquering a Competitor in the Name of Oneself
Natalya Dubova
Daniel Yellin, IBM's Director of Mobile Platform Development, speaks about the company's mobile strategy.

OS ACADEMY. LIBRARY
The Visual Future of Analytics and Non-Volatile Memory
Sergey Kuznetsov
The topics of the July and August issues of Computer magazine (IEEE Computer Society, Vol. 46, Nos. 7 and 8, 2013) are Visual Analytics and New Memory Technologies.