Research
Here is a list of the research projects I am currently working on or have worked on recently.
Miga Labs is a group specialized in next-generation
blockchain technology. Our research team is located at the
Barcelona Supercomputing Center. The focus of our work is
sharding and Proof-of-Stake protocols.
In order to increase scalability, Eth2 will be a
composition of many blockchains, called shards, connected
through a backbone called the beacon chain. This new
architecture puts pressure on different parts of the
protocol and opens up new forms of complexity. At Miga Labs
we are building systems to make sense of the vast amount of
data coming from the Eth2 network.
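As a rough illustration of the topology described above, the sketch below models a beacon chain block referencing the latest block of each shard; the types, field names, and shard count are conceptual placeholders, not the actual Eth2 protocol structures.

```c
/* Conceptual sketch of the shard/beacon topology; all names are
 * illustrative, not the real Eth2 data structures. */
#include <stdint.h>

#define NUM_SHARDS 64            /* placeholder shard count */

typedef struct {
    uint64_t slot;               /* time slot of the shard block */
    uint8_t  root[32];           /* hash root identifying the block */
} ShardBlockRef;

typedef struct {
    uint64_t slot;
    /* The beacon chain acts as the backbone: each beacon block
     * records a reference (crosslink) to the head of every shard. */
    ShardBlockRef crosslinks[NUM_SHARDS];
} BeaconBlock;
```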
FTI stands for Fault Tolerance Interface and is a library
that aims to give computational scientists the means to
perform fast and efficient multilevel checkpointing on
large-scale supercomputers. FTI leverages local storage
plus data replication and erasure codes to provide several
levels of reliability and performance. FTI performs
application-level checkpointing and allows users to select
which datasets need to be protected, improving
efficiency and avoiding wasted space, time and energy. In
addition, it offers a direct data interface so that users
do not need to deal with files and/or directory names. All
metadata is managed by FTI in a transparent fashion for the
user. If desired, users can dedicate one process per node
to overlap fault tolerance workload and scientific
computation, so that post-checkpoint tasks are executed
asynchronously.
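Below is a minimal usage sketch based on FTI's public API (FTI_Init, FTI_Protect, FTI_Snapshot, FTI_Finalize); the configuration file name, dataset sizes, and loop bounds are placeholders.

```c
#include <mpi.h>
#include <fti.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    /* FTI reads checkpoint levels and intervals from a config file.
     * After this call the application should use FTI_COMM_WORLD,
     * which excludes any process dedicated to fault tolerance. */
    FTI_Init("config.fti", MPI_COMM_WORLD);

    int i = 0;
    double field[1024] = {0};

    /* Register only the datasets that must survive a failure;
     * everything else is left unprotected to save space and time. */
    FTI_Protect(0, &i, 1, FTI_INTG);
    FTI_Protect(1, field, 1024, FTI_DBLE);

    for (; i < 1000; i++) {
        /* Checkpoints at the configured interval, or recovers the
         * protected data automatically after a restart. */
        FTI_Snapshot();
        /* ... scientific computation updating field[] ... */
    }

    FTI_Finalize();
    MPI_Finalize();
    return 0;
}
```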
At the crossroads of the energy and digital revolutions,
EoCoE develops and applies cutting-edge computational
methods in its mission to accelerate the transition to the
production, storage and management of clean, decarbonized
energy. The aim of this project is to establish an Energy
Oriented Centre of Excellence for computing applications
(EoCoE, pronounced “Echo”). EoCoE uses the prodigious
potential offered by the ever-growing computing
infrastructure to foster and accelerate the European
transition to a reliable and low-carbon energy supply. To
achieve this goal, we believe that the present revolution
in hardware technology calls for a similar paradigm shift
in the way application codes are designed. EoCoE assists
the energy transition via targeted support to four
renewable energy pillars: Meteo, Materials, Water and
Fusion, each with a heavy reliance on numerical modelling.
These four pillars are anchored within a strong transversal
multidisciplinary basis providing high-end expertise in
applied mathematics and HPC.
The three-year eProcessor project aims to build a new open
source out-of-order (OoO) processor and deliver the first
completely open source European full-stack ecosystem based
on this new RISC-V CPU. eProcessor technology will be
extendable (open source), energy efficient (low power),
extreme-scale (high performance), suitable for use in HPC
and embedded applications, and extensible (easy to add on-chip and/or
off-chip components). The project is an ambitious
combination of processor design, based on the RISC-V open
source hardware ISA, applications and system software,
bringing together multiple partners to leverage and extend
pre-existing Intellectual Property (IP), combined with new
IP that can be used as building blocks for future HPC
systems, both for traditional and emerging application
domains.
Due to fundamental limits on transistor scaling at the
atomic scale, coupled with the heat-density problems of
packing an ever-increasing number of transistors into a
unit area, Moore’s Law has slowed down. Heterogeneity aims to solve
the problems associated with the end of Moore’s Law by
incorporating more specialized compute units in the system
hardware and by utilizing the most efficient compute unit
for each computation. However, while software-stack support
for heterogeneity is relatively well developed for
performance, for power- and energy-efficient computing it
is severely lacking. The primary ambition of the LEGaTO
project is to address this challenge by starting with a
Made-in-Europe mature software stack, and optimizing this
stack to support energy-efficient computing on a commercial
cutting-edge European-developed CPU–GPU–FPGA heterogeneous
hardware substrate and FPGA-based Dataflow Engines (DFE),
which will lead to an order of magnitude increase in energy
efficiency.
We developed an open-source discrete-event blockchain
simulator in which different protocols and techniques, such
as sharding, are simulated and studied with synthetic
traces. The simulator includes features such as Byzantine
nodes and other types of attacks. It runs as multiple
processes in a distributed system communicating through
MPI, with each MPI rank simulating one node of the network.
The simulator scales to thousands of processes on a
supercomputer and simulates thousands of blocks.
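The sketch below illustrates that one-node-per-rank pattern with a single round of simplistic ring “gossip”; the event structure and all names are hypothetical, not the simulator's actual code.

```c
#include <mpi.h>
#include <stdio.h>

typedef struct {
    double timestamp;   /* simulated time of the event */
    int    block_id;    /* block being propagated */
    int    origin;      /* rank (node) that produced it */
} Event;

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI rank plays one blockchain node. A real run would
     * drain a priority queue of timestamped events; one exchange
     * is enough to show the message pattern. */
    Event out = { 0.0, rank, rank }, in;
    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;

    MPI_Sendrecv(&out, sizeof out, MPI_BYTE, next, 0,
                 &in, sizeof in, MPI_BYTE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("node %d received block %d from node %d\n",
           rank, in.block_id, in.origin);

    MPI_Finalize();
    return 0;
}
```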
Compute efficiency and energy efficiency are more than ever
major concerns for future Exascale systems. Since October
2011, the aim of the European projects called Mont-Blanc
has been to design a new type of computer architecture
capable of setting future global HPC standards, built from
energy efficient Arm solutions. Phases 1 (2011-2015) and 2
(2013-2016) of the project were coordinated by the
Barcelona Supercomputing Center (BSC). They investigated
the usage of low-power Arm processors for HPC and gave rise
to the world’s first Arm-based HPC cluster, which helped
demonstrate the viability of using Arm technology for HPC.
The third phase of the Mont-Blanc project started in
October 2015: it is coordinated by Atos (formerly Bull). It
aims at designing a new high-end HPC platform that is able
to deliver a new level of performance/energy ratio when
executing real applications. Finally, Mont-Blanc 2020 is a
spin-off of the previous projects. It is coordinated by
Atos and started in December 2017. Its ambition is to
trigger the development of the next generation of
industrial processors for Big Data and High Performance
Computing.
How does one cover the needs of both HPC (high performance
computing) and HPDA (high performance data analytics)
applications? Which hardware and software technologies are
needed? How should these technologies be combined so that
very different kinds of applications are able to
efficiently exploit them? And how can we – on the way –
tackle some of the challenges posed by next-gen
supercomputers of the Exascale class, like energy
efficiency? These are the questions the EU-funded project
DEEP-EST addresses with its Modular Supercomputing
Architecture.
Modern scientific instruments such as particle
accelerators, telescopes, and supercomputers produce
extremely large amounts of data. That scientific data needs
to be processed by systems with high computational
capabilities, such as supercomputers. Given that
scientific data is increasing in size at an exponential
rate, storing and accessing the data are becoming expensive
in both time and space. Most of this scientific data is
stored using floating-point representation. Scientific
applications executed on supercomputers spend a large
number of CPU cycles reading and writing floating-point
values, making data compression techniques an interesting
way to increase computing efficiency. Given the accuracy
requirements of scientific computing, we focus only on
lossless data compression. In this project we propose a
masking technique that partially decreases the entropy of
scientific datasets, allowing for a better compression
ratio and higher throughput. We evaluate several data
partitioning techniques for selective compression and
compare these schemes with several existing compression
strategies. Our approach shows up to 15% improvement in
compression ratio while reducing the time spent in
compression by half in some cases.
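To make the partitioning idea concrete, here is a minimal sketch of byte-plane splitting, one simple way to separate the slowly varying sign/exponent bytes of IEEE-754 doubles from the noisier mantissa bytes; it illustrates the general principle of entropy-reducing partitioning, not the project's actual masking algorithm.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Scatter each 8-byte double into eight per-position byte streams.
 * In smooth scientific data the stream holding the sign/exponent
 * bytes barely changes, so it compresses far better on its own. */
static void byte_plane_split(size_t n, const double *in, uint8_t *planes[8]) {
    for (size_t i = 0; i < n; i++) {
        uint8_t bytes[8];
        memcpy(bytes, &in[i], 8);       /* portable view of the bits */
        for (int b = 0; b < 8; b++)
            planes[b][i] = bytes[b];
    }
}

int main(void) {
    enum { N = 6 };
    double data[N] = { 1.000, 1.001, 1.002, 1.003, 1.004, 1.005 };
    uint8_t storage[8][N];
    uint8_t *planes[8];
    for (int b = 0; b < 8; b++) planes[b] = storage[b];

    byte_plane_split(N, data, planes);

    /* On little-endian machines plane 7 holds the sign/exponent
     * byte: constant for this smooth series, i.e. near-zero entropy. */
    for (int b = 0; b < 8; b++) {
        printf("plane %d:", b);
        for (size_t i = 0; i < N; i++) printf(" %02x", planes[b][i]);
        printf("\n");
    }
    return 0;
}
```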