Distributed Applications
and Interoperable Systems
20th IFIP WG 6.1 International Conference, DAIS 2020
Held as Part of the 15th International Federated Conference
on Distributed Computing Techniques, DisCoTec 2020
Valletta, Malta, June 15–19, 2020, Proceedings
Lecture Notes in Computer Science 12135
Founding Editors
Gerhard Goos
Karlsruhe Institute of Technology, Karlsruhe, Germany
Juris Hartmanis
Cornell University, Ithaca, NY, USA
Editors
Anne Remke, University of Münster, Münster, Germany
Valerio Schiavoni, University of Neuchâtel, Neuchâtel, Switzerland
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
This volume contains the papers presented at the 20th IFIP International Conference on
Distributed Applications and Interoperable Systems (DAIS 2020), sponsored by the
IFIP (International Federation for Information Processing) and organized by the IFIP
WG6.1. The DAIS conference series addresses all practical and conceptual aspects of
distributed applications, including their design, modeling, implementation, and oper-
ation, the supporting middleware, appropriate software engineering methodologies and
tools, as well as experimental studies and applications.
DAIS 2020 was meant to be held during June 15–19, 2020, in Valletta, Malta, as
part of DisCoTec, the 15th International Federated Conference on Distributed Com-
puting Techniques. However, due to the COVID-19 pandemic, the organizers decided
to turn the conference into a virtual event to be held completely online.
There were 30 initial abstract registrations for DAIS, which were then followed by
17 full papers. Each submission was reviewed by up to three Program Committee
(PC) members. The review process included an in-depth discussion phase, during
which the merits of all papers were discussed by the PC. The committee decided to
accept ten full papers, one short paper, and one invited paper.
Accepted papers address challenges in multiple application areas, including system support for machine learning, security and privacy issues, experimental reproducibility and fault tolerance, as well as novel networking approaches for future network generations. Researchers continue the trend of focusing on trusted execution environments, for instance in the case of database systems. In contrast, we notice fewer research efforts devoted to blockchain topics.
The virtual conference, especially during these last months full of unpredictable
events, was made possible by the work and cooperation of many people working in
several committees and organizations, all of which are listed in these proceedings. In
particular, we are grateful to the Program Committee members for their commitment
and thorough reviews and for their active participation in the discussion phase, and to all the external reviewers for their help in evaluating submissions. Finally, we are also thankful to the DisCoTec general chair, Adrian Francalanza, and the DAIS Steering Committee chair, Rui Oliveira, for their constant availability, support, and guidance.
General Chair
Adrian Francalanza University of Malta, Malta
Steering Committee
Rocco De Nicola IMT Lucca, Italy
Pascal Felber University of Neuchâtel, Switzerland
Kurt Geihs University of Kassel, Germany
Alberto Lluch Lafuente DTU, Denmark
Kostas Magoutis ICS-FORTH, Greece
Elie Najm (Chair) Télécom ParisTech, France
Manuel Núñez Universidad Complutense de Madrid, Spain
Rui Oliveira University of Minho, Portugal
Jean-Bernard Stefani Inria Grenoble, France
Gianluigi Zavattaro University of Bologna, Italy
Program Committee
Pierre-Louis Aublin Keio University, Japan
Sonia Ben Mokhtar LIRIS-CNRS, France
Sara Bouchenak INSA, France
Antoine Boutet INSA, France
Silvia Bonomi Università degli Studi di Roma La Sapienza, Italy
Damiano Di Francesco Maesa University of Cambridge, UK
Davide Frey Inria, France
Paula Herber University of Münster, Germany
Mark Jelasity University of Szeged, Hungary
Evangelia Kalyvianaki University of Cambridge, UK
Vana Kalogeraki Athens University of Economics and Business, Greece
Rüdiger Kapitza Technical University of Braunschweig, Germany
João Leitão Universidade Nova de Lisboa, Portugal
Daniel Lucani Aarhus University, Denmark
Miguel Matos INESC-ID, University of Lisboa, Portugal
Kostas Magoutis University of Ioannina, Greece
Additional Reviewers
Isabelly Rocha University of Neuchâtel, Switzerland
Philipp Eichhammer University of Passau, Germany
Christian Berger University of Passau, Germany
Vania Marangozova-Martin IMAG, France
Distributed Algorithms

On the Trade-Offs of Combining Multiple Secure Processing Primitives

Hugo Carvalho, Daniel Cruz, Rogério Pontes, João Paulo, and Rui Oliveira
1 Introduction
Data analytics plays a key role in generating high-quality information that
enables companies to optimize the quality of their business while presenting
several advantages such as making faster business decisions, predicting users
2 Background
This section describes the cryptographic techniques we use and their security
guarantees as well as the Intel SGX technology.
Enclaves also provide sealing capabilities that allow encrypting and authenti-
cating the data inside an enclave so that it can be written to persistent memory
without any other process having access to its contents. Also, SGX relies on
software attestation, which ensures that critical code is running within a trusted
enclave. One of the main advantages of SGX over its predecessors is its smaller Trusted Computing Base (TCB), i.e., the set of components, such as hardware, firmware, or software, that are considered critical to system security. With SGX, the TCB only includes the code that users decide to run inside their enclaves. Thus, SGX provides security guarantees against attacks from malicious software running on the same computer.
SafeSpark considers a trusted and an untrusted site. The Spark client resides on the trusted site (e.g., private infrastructure), while the Spark cluster is deployed on the untrusted one (e.g., public cloud). We assume a semi-honest, adaptive adversary (internal attacker) with control over the untrusted site, with the exception of the trusted hardware. The adversary observes every query and its access patterns, and can also replay queries. However, our model assumes that the adversary is honest-but-curious and thus does not have the capability of modifying queries or their execution. The parameters and results of queries are encrypted with a
secret key only available to the client and enclaves.
3 Related Work
Current secure data analytics platforms fall into two broad approaches. One, like the Monomi [33] system, resorts to cryptographic schemes such as DET and OPE to query sensitive data on untrusted domains. The other relies on hardware-based protected environments.
Monomi, in particular, splits the execution of complex queries between the
database server and the client. The untrusted server executes part of the query,
and when the remaining parts cannot be computed on the server or can be more
efficiently computed on the client-side, the encrypted data is sent to the client,
which decrypts it and performs the remaining parts of the query. Seabed [25] follows a similar approach, with an architecture based on splitting the query execution between the client and the server. This platform proposes two new cryptographic schemes, ASHE and SPLASHE, which allow executing arithmetic and aggregation operations directly over the cryptograms.
In contrast, VC3 [31] and Opaque [35] follow a trusted-hardware approach.
Namely, they use Intel SGX [16] to create secure enclaves where sensitive data
can be queried in plaintext without revealing private information. VC3 uses SGX
to perform secure MapReduce operations in the cloud, protecting code and sen-
sitive data from malicious attackers. Opaque is based on Apache Spark and adds
new operators that, in addition to ensuring the confidentiality and integrity of
the data, ensure that analytical processing is protected against inference attacks.
4 Architecture
SafeSpark’s architecture is based on the Apache Spark platform [10], which cur-
rently does not support big data analytics with confidentiality guarantees. In this
section, we describe a novel modular and extensible architecture that supports
the simultaneous integration of cryptographic and trusted hardware primitives.
When all the tasks are completed, the result is sent from the Spark
Workers to the Driver Program, which returns the output to the clients.
4.2 SafeSpark
SafeSpark extends Spark's architecture [10] by integrating multiple secure processing primitives that can be combined to offer different performance, security, and functionality trade-offs to data analytics applications. Figure 1 shows the proposed architecture, which comprises four new components: the SafeSpark Worker, the Handler, the CryptoBox, and the SafeMapper.
During the Data Storage phase, sensitive data is encrypted on the trusted
site before being uploaded to the untrusted Spark data source. For this, the
user must first specify in a configuration file how the data will be represented
in a tabular form. Then, for each data column, the user will specify the type of
cryptographic scheme (e.g. STD, DET, OPE) or trusted hardware technology
(e.g. Intel SGX) to be employed.
The SafeMapper module is responsible for parsing the information contained
in the configuration file and forwarding it to the SafeSpark Worker. The latter
will intercept the plaintext data being uploaded to the untrusted data source and
will encrypt each data column with the specified secure technique. The conver-
sion of plaintext to encrypted data is actually done by the Handler component,
which provides encode() and decode() methods for encrypting and decrypting
information, respectively. Moreover, the Handler uses modular entities, called CryptoBoxes, each one corresponding to a different cryptographic technique or trusted hardware technology (e.g., Intel SGX).
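A minimal sketch of how the Handler and CryptoBox abstractions might look is given below. The trait and class names, the byte-array signatures, and the use of AES in a deterministic mode for the DET CryptoBox are illustrative assumptions on our part, not SafeSpark's actual implementation.

```scala
import javax.crypto.Cipher
import javax.crypto.spec.SecretKeySpec

// Hypothetical CryptoBox abstraction: one implementation per secure technique.
trait CryptoBox {
  def encode(plaintext: Array[Byte]): Array[Byte]   // encrypt one value
  def decode(ciphertext: Array[Byte]): Array[Byte]  // decrypt one value
}

// Illustrative DET CryptoBox: AES in ECB mode is deterministic, so equal
// plaintexts map to equal ciphertexts, which is what equality operators need.
// (This cipher choice is an assumption for the sketch; the paper does not fix it.)
class DetCryptoBox(key: Array[Byte]) extends CryptoBox {
  private val keySpec = new SecretKeySpec(key, "AES")

  override def encode(plaintext: Array[Byte]): Array[Byte] = {
    val cipher = Cipher.getInstance("AES/ECB/PKCS5Padding")
    cipher.init(Cipher.ENCRYPT_MODE, keySpec)
    cipher.doFinal(plaintext)
  }

  override def decode(ciphertext: Array[Byte]): Array[Byte] = {
    val cipher = Cipher.getInstance("AES/ECB/PKCS5Padding")
    cipher.init(Cipher.DECRYPT_MODE, keySpec)
    cipher.doFinal(ciphertext)
  }
}

// The Handler dispatches each column value to the CryptoBox selected in the
// user's configuration (as parsed by the SafeMapper).
class Handler(columnToBox: Map[String, CryptoBox]) {
  def encode(column: String, value: Array[Byte]): Array[Byte] =
    columnToBox(column).encode(value)

  def decode(column: String, value: Array[Byte]): Array[Byte] =
    columnToBox(column).decode(value)
}
```

A call such as handler.encode("employees.category", bytes) would then route the value through the DET CryptoBox if that is the primitive chosen for the column in the configuration file.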
To exemplify the flow of operations in our platform, let us consider the use case
of a company that wishes to store and query their employees’ information in a
third-party cloud service. The company’s database will have an Employees table
holding the Salary, Age, and Category of each employee (database columns). These columns contain sensitive information, so the company's database administrators define a secure schema using SGX for the Salary column, OPE for Age, and DET for Category.
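For this example, the column-to-primitive mapping that the SafeMapper would derive from the configuration file could be expressed as follows; since the paper does not prescribe a concrete configuration syntax, this Scala encoding (the SecureScheme type and the map) is purely illustrative.

```scala
// Illustrative encoding of the secure schema for the Employees example.
// The scheme names mirror the primitives discussed in the paper; the Scala
// representation itself is an assumption made for this sketch.
sealed trait SecureScheme
case object STD extends SecureScheme  // standard probabilistic encryption
case object DET extends SecureScheme  // deterministic encryption (equality)
case object OPE extends SecureScheme  // order-preserving encryption (order/range)
case object SGX extends SecureScheme  // processed in plaintext inside an enclave

val employeesSchema: Map[String, SecureScheme] = Map(
  "employees.salary"   -> SGX,  // arithmetic (e.g., AVG) delegated to the enclave
  "employees.age"      -> OPE,  // BETWEEN / order comparisons over ciphertexts
  "employees.category" -> DET   // GROUP BY / equality over ciphertexts
)
```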
Firstly, the database’s information must be uploaded to the corresponding
cloud service (➀). Given the sensitive nature of this data, the upload request is
intercepted by the SafeSpark Worker (➁), which initializes the SGX, OPE, and DET CryptoBoxes specified in the configuration schema (➂) and uses them to encrypt the corresponding data columns (➃). The resulting encrypted data
columns (➄) are then uploaded into the untrusted data storage source (➅).
Note that for encrypting data with the SGX technology, we consider a symmetric cipher similar to the STD scheme. During SafeSpark's bootstrap phase, the client application, running on the trusted premises, must generate the corresponding key and exchange it with the enclave through a secure channel, so that encrypted data can be decrypted inside the secure enclave and the desired operations can be performed privately over plaintext data. This paper tackles the architectural challenges of integrating Intel SGX and other cryptographic primitives in Spark. Thus, we do not focus on the protocols for secure channel establishment or key exchange between clients and remote enclaves. Such challenges have been addressed in [11,27], which SafeSpark can rely upon in a real-world instantiation without requiring any changes to Spark's core codebase.
After completing the database's loading phase, clients can query the corresponding information. Let us consider a SQL query that averages the salaries of employees who are between 25 and 30 years old and then groups the results by category.
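In plain Spark, such a query could be written as follows; the employees table and column names come from the example above, while the exact SQL text is our own illustration rather than a query taken from the paper.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("employees-example").getOrCreate()

// Plaintext version of the example query: average salary of employees
// aged between 25 and 30, grouped by category.
val result = spark.sql(
  """SELECT category, AVG(salary) AS avg_salary
    |FROM employees
    |WHERE age BETWEEN 25 AND 30
    |GROUP BY category""".stripMargin)

result.show()
```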
When the query is sent through the Spark Context (➊), the request is intercepted by the SafeSpark Worker, which consults the user-defined configuration file (➋) to check whether the query must be rewritten to invoke secure operators from the CryptoBoxes (➌). Since the stored values of the Age column are encrypted with OPE, the SafeSpark Worker encrypts the values "25" and "30" by resorting to the same OPE CryptoBox and key. Moreover, as the Salary column is encrypted using SGX, the avg operation needs to be performed within secure SGX enclaves. Therefore, SafeSpark provides a new operator that computes the average within SGX enclaves, and the SafeSpark Worker replaces the standard avg operator with this new operator (AVG_SGX).
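Conceptually, the rewritten query that leaves the trusted site could look like the sketch below; the AVG_SGX operator name follows the paper, but the placeholder ciphertext literals and the string form of the rewrite are assumptions for illustration.

```scala
// Hypothetical rewritten query: literals are replaced by their OPE ciphertexts
// and AVG is replaced by the enclave-backed operator. The <ope(...)> markers
// stand for opaque ciphertext byte strings.
val rewrittenQuery =
  """SELECT category, AVG_SGX(salary) AS avg_salary
    |FROM employees
    |WHERE age BETWEEN '<ope(25)>' AND '<ope(30)>'
    |GROUP BY category""".stripMargin
```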
Then, after protecting sensitive query arguments at the trusted premises,
the request is sent to the untrusted premises, namely to the Cluster Manager,
which dispatches the tasks to the Spark Workers (➎). Since the GROUP BY and BETWEEN operators internally perform equality and order comparisons, and since the Category and Age columns are encrypted with the DET and OPE schemes, Spark can execute these operations directly over ciphertexts. However, the avg operation needs to be executed by the SafeSpark Workers using the process() method of the SGX CryptoBox (➏). Inside the SGX enclave, this method receives the input data, decrypts it with the previously agreed key, computes the average in plaintext, and encrypts the result before sending it back to the untrusted Spark engine.
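On the untrusted side, one way to realize such an operator with stock Spark constructs is sketched below: ciphertexts are collected per group and handed to the enclave's process() call through a UDF. The SgxCryptoBox stub, the collect_list-based aggregation, and all names here are our own assumptions; the paper does not prescribe this exact mechanism.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, collect_list, udf}

// Ciphertext placeholders: the OPE-encrypted bounds and the encrypted table
// produced during the data-storage phase (all assumed to exist already).
val encryptedEmployees: DataFrame = ???
val opeLow: Array[Byte] = ???   // OPE ciphertext of 25
val opeHigh: Array[Byte] = ???  // OPE ciphertext of 30

// Stub standing in for the enclave call: it would forward the ciphertexts to
// the SGX enclave, which decrypts, averages, and re-encrypts the result.
object SgxCryptoBox {
  def process(encryptedSalaries: Seq[Array[Byte]]): Array[Byte] = ???
}

// UDF wrapping the enclave call so it can be invoked from DataFrame code.
val avgSgx = udf((values: Seq[Array[Byte]]) => SgxCryptoBox.process(values))

// OPE preserves order, so BETWEEN runs directly over ciphertexts;
// DET preserves equality, so GROUP BY runs directly over ciphertexts;
// the average itself is delegated to the enclave via the UDF.
val avgPerCategory = encryptedEmployees
  .filter(col("age").between(opeLow, opeHigh))
  .groupBy("category")
  .agg(collect_list(col("salary")).as("salaries"))
  .withColumn("avg_salary", avgSgx(col("salaries")))
```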
The query's encrypted result is sent to the Spark Client (➐) and intercepted by the SafeSpark Worker which, based on the SafeMapper component (➑), decrypts it using the appropriate CryptoBox (➒). Lastly, the plaintext result is returned to the client (➓).
5 Implementation
6 Experimental Evaluation
Data Mining class. TPC-DS was configured with a 10× scale factor, correspond-
ing to a total of 12 GB of data to be loaded into Spark’s storage source.
We performed ten runs of each TPC-DS query for Spark Vanilla, which com-
putes over plaintext data, and for the different SafeSpark setups, which run on
top of encrypted data. For each query, we analyzed the mean and standard devia-
tion of the execution time. Also, the dstat framework [2] was used at each cluster
node to measure the CPU and memory consumption, as well as the impact on
disk read/write operations and on the network traffic. Moreover, we analyzed
the data storage times and the impact of encrypted data on storage space.
SafeSpark-SGX. This setup aims at maximizing the usage of SGX for querying sensitive information in the TPC-DS database schema. Thus, the data columns that are used within arithmetic operations or within equality and order filters were encrypted using SGX. The OPE scheme was used for all the columns involved in ORDER BY operations, since this type of operation is not supported by the SGX operator, as explained in Sect. 5.2. For the remaining columns involved in equality operations, such as GROUP BY or ROLL UP, we used the DET scheme.
SafeSpark-OPE. This scenario aims at maximizing the use of cryptographic schemes, starting with OPE and then the DET scheme. Therefore, in this case, SGX was only used for operations that are not supported by DET or OPE, namely arithmetic operations such as sums or averages. Thus, OPE was used for all the operations involving order and equality comparisons, such as ORDER BY, GROUP BY, or BETWEEN clauses. For the remaining columns, which only require equality operations, the DET scheme was used.
SafeSpark-DET. As in the previous scenario, this one also maximizes the use of cryptographic schemes. However, it prioritizes the DET primitive over the OPE one, thus reducing the number of OPE columns that were used in GROUP BY and ROLL UP operations in the previous scenario. Thus, SGX was only used for operations not supported by the OPE or DET primitives. For columns that need to preserve equality, we used DET; for columns requiring order comparisons, we used the OPE scheme. In some cases, it was necessary to duplicate columns already protected with the DET scheme, for example, when a column is targeted simultaneously by a GROUP BY (equality) and an ORDER BY (order) operation. A sketch of this per-column scheme selection is given below.
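The selection rule behind these three setups can be summarized as a small function: try the preferred cryptographic primitives first and fall back to SGX for operations they cannot support. The Operation encoding and the priority-list formulation are our own illustrative assumptions, reusing the SecureScheme type from the earlier configuration sketch.

```scala
// Hypothetical encoding of the per-column scheme selection: the priority list
// decides which primitive is tried first, and SGX is the fallback for
// operations that no cryptographic scheme supports.
sealed trait Operation
case object Equality   extends Operation  // GROUP BY, ROLL UP, equality filters
case object Order      extends Operation  // ORDER BY, BETWEEN
case object Arithmetic extends Operation  // SUM, AVG, ...

def supports(scheme: SecureScheme, op: Operation): Boolean = (scheme, op) match {
  case (DET, Equality)         => true
  case (OPE, Equality | Order) => true
  case (SGX, _)                => true   // the enclave computes over plaintext
  case _                       => false  // STD supports no server-side processing
}

// Pick the first primitive in `priority` that supports every operation the
// column is involved in; SGX always matches, so it acts as the fallback.
def chooseScheme(ops: Set[Operation], priority: List[SecureScheme]): SecureScheme =
  priority.find(s => ops.forall(op => supports(s, op))).getOrElse(SGX)

// SafeSpark-OPE prioritizes OPE, then DET; SafeSpark-DET inverts that order.
val opeSetupChoice = chooseScheme(Set(Order, Equality), List(OPE, DET, SGX)) // -> OPE
val detSetupChoice = chooseScheme(Set(Equality),        List(DET, OPE, SGX)) // -> DET
```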
6.3 Results
This section presents the results obtained from the experimental evaluation.
The impact shown throughout the storage phase can be explained by the use of the OPE scheme to encrypt data, since it has a longer encoding time compared with the other schemes, especially when the plaintext is larger [22]. Also, the cryptograms produced by this scheme are significantly larger than the original plaintext, which explains the observed increase in stored data size. In some situations, the cryptogram is up to 4× larger than the original plaintext. It is important to note that all setups resort to the OPE primitive. However, SafeSpark-SGX is the setup that uses this primitive the least and therefore has the fastest loading time. On the other hand, SafeSpark-DET has a higher loading time because it duplicates some columns to incorporate both the DET and OPE primitives, as explained in Sect. 6.2.
6.3.2 Latency
Figure 2 presents the query latency results for the three SafeSpark configurations and Vanilla Spark. The values reflect each query's execution time, as well as the time used to encrypt the query's parameters and to decrypt the query results when these are sent back to the client.
As expected, SafeSpark has worse performance than Spark Vanilla due to the performance overhead of the secure primitives. The SafeSpark-SGX scenario exhibits the highest overhead: its best result occurs for query 24, with a 1.54× penalty, and the worst for query 82, with a 4.1× penalty. These values can be justified by two factors. First, this scenario maximizes the use of SGX to protect data, leading to a large number of data transfer and processing operations being
executed within the SGX enclaves. We noted that, for example, query 31 has
Fig. 2. Query latency (in seconds) for TPC-DS queries Q24, Q27, Q31, Q37, Q40, Q46, Q73, and Q82 under Spark Vanilla, SafeSpark-SGX, SafeSpark-OPE, and SafeSpark-DET.
The results also show that SafeSpark-DET alleviates the penalty of decrypting data by reducing the usage of the OPE scheme and maximizing the usage of the DET scheme. Consequently, as the number of values decrypted with the OPE scheme decreases, so does the query execution time.
The CPU and memory consumption do not show notable changes, even considering the process of decrypting the query results and the computational power used by Intel SGX. The worst CPU consumption result occurred on query 40 with SafeSpark-DET, presenting an overhead of 31% when compared to Vanilla Spark. Regarding memory consumption, the worst overhead was 10%, for SafeSpark-SGX on query 37.
SafeSpark has an impact on disk and network I/O. Query 46 with SafeSpark-
SGX shows an overhead of 107% on disk reads, and query 82 with SafeSpark-
DET has a 97% overhead on disk writes, when compared with Spark Vanilla.
Finally, the highest network traffic overhead occurs for query 46 with SafeSpark-DET (approximately 87%). These overheads are explained by the fact that the cryptograms generated by SafeSpark, which are sent through the network and stored persistently, are larger than the plaintext data. This is even more relevant when using the OPE scheme, as it generates larger cryptograms.
6.4 Discussion
7 Conclusion
This paper presents SafeSpark, a modular and extensible secure data analytics
platform that combines multiple secure processing primitives to better handle the
performance, security, and functionality requirements of distinct data analytics applications.
References
1. The Cambridge Analytica files. https://www.theguardian.com/news/series/cambridge-analytica-files. Accessed 2019
2. Dstat: versatile resource statistics tool. http://dag.wiee.rs/home-made/dstat/. Accessed 2020
3. EU General Data Protection Regulation. https://eugdpr.org/. Accessed 2020
4. Isaac, M., Frenkel, S.: Facebook security breach exposes accounts of 50 million users. https://www.nytimes.com/2018/09/28/technology/facebook-hack-data-breach.html. Accessed 2020
5. OpenSSL: cryptography and SSL/TLS toolkit. https://www.openssl.org/. Accessed 2020
6. Perlroth, N.: All 3 billion Yahoo accounts were affected by 2013 attack. https://www.nytimes.com/2017/10/03/technology/yahoo-hack-3-billion-users.html. Accessed 2020
7. Roman, J.: eBay breach: 145 million users notified. https://www.bankinfosecurity.com/ebay-a-6858. Accessed 2020
8. Agrawal, R., Kiernan, J., Srikant, R., Xu, Y.: Order preserving encryption for
numeric data. In: Proceedings of the 2004 ACM SIGMOD International Conference
on Management of Data, pp. 563–574. ACM (2004)
9. ARM: Security technology: building a secure system using TrustZone technology (white paper). ARM Limited (2009)
10. Armbrust, M., et al.: Spark SQL: relational data processing in Spark. In: Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pp. 1383–1394. ACM (2015)
11. Bahmani, R., et al.: Secure multiparty computation from SGX. In: Kiayias, A. (ed.) FC 2017. LNCS, vol. 10322, pp. 477–497. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70972-7_27
12. Boldyreva, A., Chenette, N., Lee, Y., O'Neill, A.: Order-preserving symmetric encryption. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 224–241. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01001-9_13
13. Boneh, D., Lewi, K., Raykova, M., Sahai, A., Zhandry, M., Zimmerman, J.: Semantically secure order-revealing encryption: multi-input functional encryption without obfuscation. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9057, pp. 563–594. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46803-6_19
14. Chambers, B., Zaharia, M.: Spark: The Definitive Guide: Big Data Processing
Made Simple. O’Reilly Media Inc, Sebastopol (2018)
15. Costan, V., Devadas, S.: Intel SGX explained. IACR Cryptology ePrint Archive
2016(086), 1–118 (2016)
16. Durak, F.B., DuBuisson, T.M., Cash, D.: What else is revealed by order-revealing
encryption? In: Proceedings of the 2016 ACM SIGSAC Conference on Computer
and Communications Security, pp. 1155–1166. ACM (2016)
17. Fousse, L., Hanrot, G., Lefèvre, V., Pélissier, P., Zimmermann, P.: MPFR: a
multiple-precision binary floating-point library with correct rounding. ACM Trans.
Math. Softw. (TOMS) 33(2), 13 (2007)
18. Goldreich, O.: Secure multi-party computation. Manuscript. Preliminary version
78 (1998)
19. Apache Hadoop. http://hadoop.apache.org. Accessed 2020
20. Katz, J., Lindell, Y.: Introduction to Modern Cryptography. CRC Press, Boca
Raton (2014)
21. Kocberber, O., Grot, B., Picorel, J., Falsafi, B., Lim, K., Ranganathan, P.: Meet the
walkers: accelerating index traversals for in-memory databases. In: Proceedings of
the 46th Annual IEEE/ACM International Symposium on Microarchitecture, pp.
468–479. ACM (2013)
22. Macedo, R., et al.: A practical framework for privacy-preserving NoSQL databases.
In: 2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS), pp. 11–
20. IEEE (2017)
23. Nambiar, R.O., Poess, M.: The making of TPC-DS. In: Proceedings of the 32nd International Conference on Very Large Data Bases, pp. 1049–1058. VLDB Endowment (2006)
24. Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 223–238. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48910-X_16
25. Papadimitriou, A., et al.: Big data analytics over encrypted datasets with Seabed. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 587–602 (2016)
26. Apache Parquet. Accessed 2020
27. Pass, R., Shi, E., Tramèr, F.: Formal abstractions for attested execution secure processors. In: Coron, J.-S., Nielsen, J.B. (eds.) EUROCRYPT 2017. LNCS, vol. 10210, pp. 260–289. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56620-7_10
28. Poess, M., Nambiar, R.O., Walrath, D.: Why you should run TPC-DS: a workload
analysis. In: Proceedings of the 33rd International Conference on Very Large Data
Bases, pp. 1138–1149. VLDB Endowment (2007)
29. Popa, R.A., Redfield, C., Zeldovich, N., Balakrishnan, H.: CryptDB: protecting
confidentiality with encrypted query processing. In: Proceedings of the Twenty-
Third ACM Symposium on Operating Systems Principles, pp. 85–100. ACM (2011)