Parallel languages have traditionally focused on performance, but performance alone is not sufficient to overcome the barrier of
developing software that exploits the power of evolving architectures. DARPA initiated the High Productivity Computing Systems
(HPCS) languages project as a solution that addresses software productivity goals through language design. The three resulting
languages are Chapel from Cray, X10 from IBM, and Fortress from Sun. We identify the memory model as a classifier for parallel languages and present details on the shared, distributed, and
partitioned global address space (PGAS) models. We then compare the HPCS languages in detail through the idioms they support
for five common tasks in parallel programming: data parallelism, data distribution, asynchronous remote task creation,
nested parallelism, and remote transactions.
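As a rough illustration of two of these idioms for readers without a Chapel or X10 compiler at hand, the following sketch expresses data parallelism and asynchronous task creation using ordinary Java concurrency utilities. This is not code from the paper and it is not HPCS-language syntax; the class name and values are made up, and the Java constructs are only loose analogues (a parallel stream standing in for a forall-style loop, and a CompletableFuture standing in for an async/begin-spawned task, with no remote locale involved).

import java.util.concurrent.CompletableFuture;
import java.util.stream.IntStream;

public class IdiomSketch {
    public static void main(String[] args) {
        // Data parallelism: apply an operation to every element in parallel,
        // comparable in spirit to a forall loop in Chapel.
        double[] a = new double[8];
        IntStream.range(0, a.length).parallel().forEach(i -> a[i] = i * 2.0);

        // Asynchronous task creation: spawn work that runs concurrently with
        // the caller, loosely analogous to X10's async or Chapel's begin,
        // though here it runs on a local thread pool rather than a remote place.
        CompletableFuture<Double> sum = CompletableFuture.supplyAsync(() -> {
            double s = 0.0;
            for (double v : a) s += v;
            return s;
        });

        // join() blocks until the asynchronous task completes, playing a role
        // similar to an enclosing finish block in X10.
        System.out.println("sum = " + sum.join());
    }
}

The HPCS languages go further than this sketch can show: their idioms also cover data distribution across locales, nested parallelism, and remote transactions, which have no direct counterpart in standard Java; see the full paper linked below for working code in Chapel, X10, and Fortress.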
The full paper, including working code, is available at http://grids.ucs.indiana.edu/ptliupages/publications/Survey_on_HPCS_Languages_formatted_v2.pdf
I just wonder what the outcome of the HPCS project was. Did the project intend to select one language (as well as a parallel computer) as the "winner"? It seems that neither Chapel nor X10 was declared one.