Custom loss functions

How do we train ML models to allocate resources in communication systems?

In the spring of 2023, we submitted a paper to IEEE Transactions on Machine Learning in Communications and Networking. This was its abstract:

Machine Learning (ML) is a popular tool that will be pivotal in enabling 6G and beyond communications. This paper focuses on applying ML solutions to address outage probability issues commonly encountered in these systems. In particular, we consider a single-user multi-resource greedy allocation strategy, where an ML binary classification predictor assists in selecting an adequate resource. With no access to future channel state information, this predictor foresees each resource’s likely future outage status. When the predictor encounters a resource it believes will be satisfactory, that resource is allocated to the user. Critically, the goal of the predictor is to ensure that the user avoids an unsatisfactory resource, since this is likely to cause an outage. Our main result establishes exact and asymptotic expressions for this system’s outage probability. With this, we formulate a theoretically optimal, differentiable loss function to train our predictor. We then compare predictors trained using this loss function with predictors trained using traditional loss functions, namely binary cross-entropy (BCE), mean squared error (MSE), and mean absolute error (MAE). Predictors trained using our novel loss function achieve superior outage probability in all scenarios, sometimes outperforming those trained with the BCE, MSE, and MAE loss functions by multiple orders of magnitude.
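
The loss function in the paper is derived from the system’s exact outage-probability expression, which is not reproduced in this post. The sketch below only illustrates the general recipe the abstract points at: define a differentiable, task-aware loss as a drop-in replacement for BCE and use it to train a binary outage predictor. The `OutageWeightedLoss` class, its `outage_weight` parameter, and the synthetic data are illustrative assumptions of mine, not the loss derived in the paper.

```python
import torch
import torch.nn as nn


class OutageWeightedLoss(nn.Module):
    """Hypothetical task-aware loss: penalises missed outages more than false alarms.

    This is NOT the loss function from the paper (that one is derived from the
    system's outage-probability analysis); it is only a generic sketch of how a
    differentiable, custom loss can replace plain BCE during training.
    """

    def __init__(self, outage_weight: float = 10.0):
        super().__init__()
        self.outage_weight = outage_weight

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # targets: 1.0 = resource will be in outage, 0.0 = resource is satisfactory
        probs = torch.sigmoid(logits)
        # Weighted binary cross-entropy: mistakes on outage resources cost more.
        per_sample = -(self.outage_weight * targets * torch.log(probs + 1e-8)
                       + (1.0 - targets) * torch.log(1.0 - probs + 1e-8))
        return per_sample.mean()


# Usage sketch: train a small binary outage predictor with the custom loss.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = OutageWeightedLoss(outage_weight=10.0)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(256, 8)               # placeholder channel features
labels = (torch.rand(256, 1) < 0.1).float()  # placeholder outage labels (~10% outage)

for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
```

For this particular weighting, PyTorch’s built-in `nn.BCEWithLogitsLoss(pos_weight=...)` would do the same job more robustly; the point the abstract makes is that the loss should follow from the system’s outage-probability analysis rather than from a hand-tuned weighting like the one above.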
