Closed-loop Error Correction Learning Accelerates Experimental Discovery of Thermoelectric Materials.


Bibliographic Details
Published in: Advanced Materials
Main Authors: Choubisa, Hitarth; Haque, Mohammed; Zhu, Tong; Zeng, Lewei; Vafaie, Maral; Baran, Derya; Sargent, E.
Author Affiliations: Material Science and Engineering Program, Physical Science and Engineering (PSE) Division, KAUST Solar Center (KSC); Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada.
Format: Article in Journal/Newspaper
Language: English
Published: Wiley 2023
Subjects:
Online Access: http://hdl.handle.net/10754/689874
https://doi.org/10.1002/adma.202302575
Description
Summary: The exploration of thermoelectric materials is challenging given the large materials space, combined with the added exponential degrees of freedom coming from doping and the diversity of synthetic pathways. Here we seek to incorporate historical data and update it using experimental feedback by employing error-correction learning (ECL). We thus learn from prior datasets and then adapt the model to differences in synthesis and characterization that are otherwise difficult to parameterize. We then apply this strategy to the discovery of thermoelectric materials, prioritizing synthesis at temperatures below 300 °C. We document a previously unexplored chemical family of thermoelectric materials, PbSe:SnSb, finding that the best candidate in this family, 2 wt% SnSb-doped PbSe, exhibits a power factor more than 2x that of PbSe. The investigations herein reveal that a closed-loop experimentation strategy reduces the number of experiments required to find an optimized material by a factor of up to 3x compared with high-throughput searches powered by state-of-the-art machine learning (ML) models. We also observe that this improvement depends on the accuracy of the ML model in a manner that exhibits diminishing returns: once a certain accuracy is reached, factors associated with experimental pathways instead begin to dominate trends.

Acknowledgements: M.A.H. and H.C. contributed equally to this work. This publication is supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2018-3737. The TOC graphic was created by Ana Bigio, scientific illustrator at KAUST. ML models were trained on the QUEST computing cluster at Northwestern University. DFT calculations were performed on both the QUEST computing cluster at Northwestern University and the Narval computing cluster, which is part of Compute Canada and was made accessible through the University of Toronto. We thank Lydia Li for help in designing Figure 1.
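The summary describes error-correction learning only at a high level: a model is first trained on historical data, then corrected round by round using feedback from new experiments run in a closed loop. The sketch below illustrates one way such a loop could be organized; it is not the authors' pipeline. The random forest surrogate, the synthetic descriptors, the greedy acquisition rule, and the run_experiment placeholder are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of closed-loop error-correction learning (ECL).
# All data, models, and the experiment oracle are stand-ins, not the paper's code.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Historical dataset: material/doping descriptors -> measured property (e.g. power factor).
X_hist = rng.normal(size=(500, 8))
y_hist = X_hist[:, 0] - 0.5 * X_hist[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

# Unmeasured candidate pool (e.g. doped compositions synthesizable below 300 °C).
X_pool = rng.normal(size=(200, 8))

def run_experiment(x):
    """Placeholder for one synthesis + characterization cycle. A systematic offset
    mimics lab-specific differences (synthesis route, measurement setup) that the
    prior model cannot parameterize."""
    return x[0] - 0.5 * x[1] ** 2 + 0.3 + rng.normal(scale=0.05)

# Step 1: learn a prior model from the historical dataset.
prior = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_hist, y_hist)

# Step 2: closed-loop ECL. Each round, predict with the prior model plus a learned
# correction, measure the best candidate, and refit the correction on the residuals.
X_meas, err_meas = [], []
correction = None  # model of (experiment - prior prediction)

for step in range(20):
    pred = prior.predict(X_pool)
    if correction is not None:
        pred = pred + correction.predict(X_pool)  # error-corrected prediction

    best = int(np.argmax(pred))           # greedy acquisition: highest predicted value
    y_new = run_experiment(X_pool[best])  # one experimental feedback point

    # Residual between experiment and the prior model drives the correction model.
    X_meas.append(X_pool[best])
    err_meas.append(y_new - prior.predict(X_pool[best:best + 1])[0])
    correction = RandomForestRegressor(n_estimators=100, random_state=0)
    correction.fit(np.array(X_meas), np.array(err_meas))

    X_pool = np.delete(X_pool, best, axis=0)  # do not re-measure the same candidate
    print(f"step {step:2d}: measured value {y_new:.3f}")
```

In this reading, the correction model absorbs the systematic gap between historical predictions and the lab's own measurements, which is why comparatively few experiments are needed before the loop converges on strong candidates.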