During Pi School of AI Session 12, we helped companies integrate AI into their processes through cutting-edge projects on retail sales forecasting, GPT-powered translation, and the automatic evaluation of CRM chatbots. We also contributed to a better understanding of wildfire forecasting by supporting the ESA Φ-lab, the National Observatory of Athens, Harokopio University of Athens, and the Max Planck Institute for Biogeochemistry, which sponsored the challenge Advancing Wildfire Forecasting using Explainable AI.
Wildfire events have been increasing globally due to climate change, which makes it essential to improve our ability to anticipate fire danger and understand the driving mechanisms on a global scale. The SeasFire project, funded by the European Space Agency (ESA), aims to address this challenge by enhancing global fire forecasting capabilities using Earth Observation data.
The National Observatory of Athens (NOA) collected and analyzed seasonal fire-driver data, such as climate, vegetation, oceanic indices, and population density, together with burned-area data from 2001 to 2021. Initial studies have shown that Deep Learning can potentially improve short-term regional and long-term global wildfire forecasting. However, to better understand which climate variables are most critical for forecasting wildfire events at different lead times and in different ecoregions, Explainable AI (XAI) techniques need to be leveraged.
The outcome of this challenge will improve our ability to predict wildfire events, enabling more effective responses and management of fire risk globally.
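To give a flavour of what such an XAI analysis can look like, here is a minimal, self-contained sketch: it trains a toy classifier on synthetic fire-driver variables and ranks them with permutation importance, one simple attribution technique. The variables, data, and model are illustrative assumptions, not the SeasFire team's actual datacube or models.

```python
# Illustrative sketch (not the SeasFire pipeline): rank synthetic fire-driver
# variables by permutation importance for a toy burned-area classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
# Hypothetical monthly fire drivers per grid cell (all synthetic)
X = pd.DataFrame({
    "temperature_anomaly": rng.normal(0, 1, n),
    "relative_humidity": rng.uniform(10, 90, n),
    "ndvi": rng.uniform(0.1, 0.9, n),
    "population_density": rng.lognormal(1, 1, n),
    "oni_index": rng.normal(0, 0.8, n),  # oceanic index
})
# Synthetic label: fire occurrence driven mostly by heat and dryness
logit = 1.5 * X["temperature_anomaly"] - 0.05 * X["relative_humidity"] + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each driver hurt model skill?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:22s} {score:+.3f}")
```

In a real setting, the same idea applies per lead time and per ecoregion, which is what makes the attributions interpretable for fire-danger analysts.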
Johanna Strebl and Giovanni Paolini.
Cristiano De Nobili.
Recent advancements in Natural Language Processing (NLP) have significantly improved the ability of chatbots to interact with humans in a natural and engaging way. As chatbots become more sophisticated, they are increasingly being integrated into various systems to provide personalized recommendations or assistance. However, there is currently no standard way to evaluate the effectiveness or clarity of a chatbot. Human evaluators are limited in their ability to test the chatbot's behaviour on specific topics of interest, leaving the field in need of more advanced evaluation methods.
The goal of this Challenge is to develop and test evaluation methods that benchmark chatbots by using advanced, GPT-like language models to imitate human evaluators. The project aims to improve chatbots' effectiveness and reliability as tools for businesses to engage with their customers.
The output will also include a two-hour workshop on Prompt Engineering, which covers both established and emerging techniques. The results of this Challenge have the potential to transform the way businesses interact with their customers and provide personalized experiences.
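To make the "language model as evaluator" idea concrete, the sketch below asks a GPT-like model to grade a chatbot reply against a fixed rubric. It assumes the openai Python client (v1 API); the rubric, model name, and JSON scoring format are illustrative choices, not the challenge's actual benchmark.

```python
# Minimal sketch of an LLM-as-evaluator loop, assuming the `openai` Python
# client (v1 API) and an available chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are evaluating a customer-support chatbot reply.\n"
    "Score it from 1 to 5 on (a) helpfulness and (b) clarity.\n"
    "Answer as JSON: {\"helpfulness\": int, \"clarity\": int, \"comment\": str}."
)

def evaluate_reply(user_message: str, bot_reply: str) -> str:
    """Ask a GPT-like model to play the role of a human evaluator."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user",
             "content": f"User asked: {user_message}\nChatbot replied: {bot_reply}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(evaluate_reply("How do I reset my password?",
                     "Click 'Forgot password' on the login page."))
```

Running such an evaluator over many scripted conversations is what turns ad-hoc human spot checks into a repeatable benchmark.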
Neeraj Rajpurohit, Shubhanshu Saxena and Aadarsh Gupta.
Cristiano De Nobili.
High-quality translations are essential for businesses to communicate effectively across borders and cultures. However, achieving consistency in translations can be challenging, especially when dealing with domain-specific terminology and requirements. While Neural Machine Translation (NMT) models can help achieve consistency, fine-tuning them on domain-specific data can be expensive and time-consuming.
The aim of this Challenge is to improve the consistency of automatic translations in real time using Large Language Models (LLMs). By leveraging the in-context learning capabilities of LLMs, accurate and consistent translations can be obtained without fine-tuning on domain-specific data.
The result will provide businesses with high-quality, consistent translations that accurately reflect their intended message. With real-time translations that are both accurate and consistent, companies can communicate more effectively with their global partners and customers.
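Here is a minimal sketch of the in-context learning idea: the prompt below embeds a terminology glossary and a few example translations so that a capable LLM reuses the same domain terms every time. The glossary, examples, and target language are invented for illustration.

```python
# Sketch of in-context learning for translation consistency: pin down domain
# terminology with a glossary plus a few example translations in the prompt.
GLOSSARY = {
    "purchase order": "ordine di acquisto",
    "lead time": "tempo di approvvigionamento",
}

FEW_SHOT = [
    ("The purchase order was approved.", "L'ordine di acquisto è stato approvato."),
    ("Lead time is two weeks.", "Il tempo di approvvigionamento è di due settimane."),
]

def build_translation_prompt(source_text: str, target_lang: str = "Italian") -> str:
    """Build a few-shot prompt that enforces consistent domain terminology."""
    code = target_lang.upper()[:2]
    lines = [f"Translate from English to {target_lang}.",
             "Always use these domain terms:"]
    lines += [f"- '{en}' -> '{tgt}'" for en, tgt in GLOSSARY.items()]
    lines.append("Examples:")
    lines += [f"EN: {en}\n{code}: {tgt}" for en, tgt in FEW_SHOT]
    lines.append(f"EN: {source_text}\n{code}:")
    return "\n".join(lines)

print(build_translation_prompt("Please confirm the lead time for this purchase order."))
```

The same prompt can be sent to any LLM endpoint, which is what makes the approach real-time: updating the glossary is enough to change the model's behaviour, with no retraining.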
Mahdi Molaei, Marco Da Mommio and Gianfranco Romani.
Francesco Cariaggi
The fair trade market traditionally relies on direct commercial relationships with retailers to distribute its products. However, this process is often inefficient, leading to stockouts and unsold inventory. To address this problem, Pi School fellows have partnered with a fair trade startup to develop an inventory optimisation system for retailers. This system aims to automate the restocking of fair trade products and optimise replenishment to maximise sales and minimise unsold units.
By leveraging advanced data analysis techniques such as time series analysis and cutting-edge machine learning algorithms, the system will generate accurate restocking orders that ensure the right products are always in stock at the right time. This will improve retailers' bottom line and enhance the overall customer experience by reducing out-of-stock situations and minimising wasted inventory. Ultimately, this partnership aims to transform how fair trade products are sold and distributed, benefiting retailers, customers, and the industry.
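As a simplified illustration of forecast-driven replenishment (not the partner's actual system), the sketch below forecasts next-period demand with a moving average and derives a restocking quantity with a safety-stock buffer; the sales history, service level, and stock figures are made up.

```python
# Toy illustration of forecast-driven restocking: forecast next-week demand
# with a moving average and add a safety-stock buffer before reordering.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Hypothetical weekly sales history for one fair-trade product
sales = pd.Series(rng.poisson(lam=20, size=26),
                  index=pd.date_range("2023-01-01", periods=26, freq="W"))

window = 8
forecast = sales.rolling(window).mean().iloc[-1]   # naive moving-average forecast
demand_std = sales.rolling(window).std().iloc[-1]
safety_stock = 1.65 * demand_std                   # ~95% service level (z = 1.65)

on_hand = 12                                       # current shelf stock (assumed)
reorder_qty = max(0, round(forecast + safety_stock - on_hand))
print(f"forecast={forecast:.1f}, safety_stock={safety_stock:.1f}, reorder={reorder_qty}")
```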
Product management icons created by Freepik - Flaticon
Lorenzo Tarricone, Francesco Vitale.
Marcello Politi
Data Scientist at Assesso
Lucas is an accomplished AI analyst and data scientist with over 10 years of experience driving innovation and delivering results for clients. With extensive expertise in machine learning, statistical analysis, and data visualization, as well as experience in Business Intelligence (BI), Lucas has successfully completed projects for international organizations and private companies across different industries. He is also a Kaggle competitions expert. Lucas is known for effectively communicating complex technical concepts to non-technical stakeholders, and his strategic thinking has helped his clients achieve their business goals. His passion for data-driven insights and his commitment to delivering high-quality solutions have earned him a reputation as a trusted expert in the industry.
PhD Researcher - Machine Translation
Yasmin is a final-year PhD candidate in Computer Science and Machine Translation, with several years of hands-on experience in the language technology industry.