Is F1 the appropriate criterion to use? What about F2, F3, …, F-beta?
The F1 measure, the harmonic mean of precision and recall, is very commonly used for binary classification. However, the more general F-beta score may better reflect model performance. So, what about F2, F3, and F-beta?
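
As a quick illustration (my own sketch, not taken from the post), the F-beta score is the weighted harmonic mean of precision and recall, with recall weighted beta times as heavily as precision:

```python
import numpy as np
from sklearn.metrics import fbeta_score  # reference implementation for comparison

def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta: weighted harmonic mean of precision and recall.
    beta > 1 favors recall (e.g., F2, F3), beta < 1 favors precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy labels and predictions, purely illustrative
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = np.sum((y_true == 1) & (y_pred == 1))
precision = tp / np.sum(y_pred == 1)   # TP / (TP + FP)
recall = tp / np.sum(y_true == 1)      # TP / (TP + FN)

for beta in (1.0, 2.0, 3.0):
    print(beta, f_beta(precision, recall, beta), fbeta_score(y_true, y_pred, beta=beta))
```
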
Value-based Methods in Deep Reinforcement Learning
Deep reinforcement learning has been a rising field in the last few years. A good place to start is the family of value-based methods, in which state (or state-action) values are learned. This post provides a comprehensive review, focusing on Q-learning and its extensions.
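
For readers new to value-based methods, here is a minimal tabular Q-learning sketch on a toy chain environment (the environment and hyperparameters are illustrative and not taken from the post):

```python
import numpy as np

# Toy deterministic chain: states 0..4, actions 0 (left) / 1 (right),
# reward 1 for reaching the terminal state 4, otherwise 0.
N_STATES, N_ACTIONS, TERMINAL = 5, 2, 4

def step(state, action):
    next_state = min(state + 1, TERMINAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == TERMINAL else 0.0
    return next_state, reward, next_state == TERMINAL

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    state, done = 0, False
    while not done:
        if rng.random() < epsilon:
            action = int(rng.integers(N_ACTIONS))        # explore
        else:
            best = np.flatnonzero(Q[state] == Q[state].max())
            action = int(rng.choice(best))               # exploit, random tie-break
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap on the greedy value of the next state
        target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print(np.round(Q, 3))  # the greedy policy should point "right" in every state
```
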
Tuning the Q Matrix for CV and CA Models in the Kalman Filter
The Kalman Filter (KF) is widely used for vehicle navigation tasks, and in particular for vehicle trajectory smoothing. One of the challenges in applying the KF to navigation is modeling the vehicle trajectory, for example with constant-velocity (CV) or constant-acceleration (CA) motion models.
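As a concrete illustration, here is one common way to build the process noise covariance Q for CV and CA models from the sampling interval and a noise standard deviation (a sketch of the piecewise white-noise acceleration/jerk formulation, not necessarily the exact models tuned in the post):

```python
import numpy as np

def q_cv(dt: float, sigma_a: float) -> np.ndarray:
    """Process noise for a constant-velocity (CV) model with state [pos, vel],
    assuming a piecewise-constant white acceleration with std sigma_a."""
    g = np.array([[0.5 * dt**2], [dt]])
    return sigma_a**2 * (g @ g.T)

def q_ca(dt: float, sigma_j: float) -> np.ndarray:
    """Process noise for a constant-acceleration (CA) model with state
    [pos, vel, acc], assuming a piecewise-constant white jerk with std sigma_j."""
    g = np.array([[0.5 * dt**2], [dt], [1.0]])
    return sigma_j**2 * (g @ g.T)

dt = 0.1  # [s], illustrative sampling interval
print(q_cv(dt, sigma_a=0.5))
print(q_ca(dt, sigma_j=0.5))
```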

Exploring The Latest Trends of Random Forest

The random forest is considered one of the most promising ML ensemble models and has recently become highly popular. In this post, we review the latest trends in random forest research.

Kalman Filter Celebrates 60 Years — An Intro.

The Kalman filter is one of the most influential ideas used in Engineering, Economics, and Computer Science for real-time applications. This year marks 60 years since the original publication.

AI-Based Worldwide Trends Due to COVID-19

COVID-19 has affected the worldwide economy, politics, education, tourism, and practically everything else. Many academic papers harness the power of Artificial Intelligence to predict trends in various fields affected by COVID-19.

Deep Learning in Geometry: Arclength Learning

A fundamental problem in geometry was solved using a Deep Neural Network (DNN): learning a geometric property from examples in a supervised manner. As the simplest geometric object is a curve, we focused on learning the length of planar curves. To this end, the fundamental length axioms were reconstructed and the ArcLengthNet was established.
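
To make the learning target concrete, here is a simple numerical computation of the quantity the network approximates, the arc length of a sampled planar curve (my own sketch, not the ArcLengthNet itself):

```python
import numpy as np

def arclength(points: np.ndarray) -> float:
    """Approximate the length of a planar curve given as an (N, 2) array of
    samples, by summing the Euclidean norms of consecutive segments."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

# Unit circle sampled densely: the length should approach 2*pi.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
print(arclength(circle), 2.0 * np.pi)
```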

The Exploding and Vanishing Gradients Problem in Time Series

In this post, we deal with exploding and vanishing gradients in time series models, and in particular in Recurrent Neural Networks (RNNs), using Truncated Backpropagation Through Time and gradient clipping.
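
The two remedies named above can be sketched in a few lines of PyTorch (an illustrative toy setup, not the post's actual model): the hidden state is detached between chunks (truncated BPTT) and gradients are clipped by norm before each optimizer step.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
params = list(rnn.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

series = torch.randn(1, 400, 1)   # toy univariate time series
chunk_len, hidden = 50, None       # truncation length for BPTT

for start in range(0, series.size(1) - chunk_len, chunk_len):
    x = series[:, start:start + chunk_len, :]
    y = series[:, start + 1:start + chunk_len + 1, :]  # predict the next step
    if hidden is not None:
        hidden = hidden.detach()   # truncated BPTT: cut the gradient path here
    out, hidden = rnn(x, hidden)
    loss = nn.functional.mse_loss(head(out), y)

    optimizer.zero_grad()
    loss.backward()
    # gradient clipping: rescale gradients so their global norm is at most 1.0
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    optimizer.step()
```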

Penalizing the Discount Factor in Reinforcement Learning

Reinforcement learning is used in many robotics problems and has a unique mechanism in which rewards are accumulated through actions. But what about the time between these actions?
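
To illustrate the mechanism being questioned (a sketch with my own toy numbers, not the post's method): the return sums rewards weighted by the discount factor per step, and one illustrative way to account for the time that elapses between actions is to raise the discount to the elapsed duration.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Standard return: G = sum_t gamma**t * r_t (one step per action)."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

def time_aware_return(rewards, durations, gamma):
    """Illustrative variant (an assumption, not the post's proposal): discount
    by the time elapsed before each reward, so long gaps between actions are
    penalized more heavily."""
    elapsed = np.cumsum([0.0] + list(durations[:-1]))
    return sum(gamma**dt * r for dt, r in zip(elapsed, rewards))

rewards = [1.0, 1.0, 1.0]
print(discounted_return(rewards, gamma=0.9))                      # 1 + 0.9 + 0.81
print(time_aware_return(rewards, durations=[2.0, 5.0, 1.0], gamma=0.9))
```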

Temporal Convolutional Networks, The Next Revolution for Time-Series

This post reviews the latest innovations in TCN-based solutions. We first present a motion-detection case study and briefly review the TCN architecture and its advantages over conventional approaches such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). Then, we introduce several novel works that use TCNs, including improved traffic prediction, sound event localization and detection, and probabilistic forecasting.
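
The core TCN building block, a causal, dilated 1-D convolution, can be sketched in a few lines of PyTorch (an illustrative block only, not any specific paper's full architecture):

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that never looks into the future: the input is padded
    on the left only, by (kernel_size - 1) * dilation samples."""
    def __init__(self, in_channels, out_channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # left padding keeps causality
        return self.conv(x)

# Stacking blocks with exponentially growing dilation widens the receptive field.
tcn = nn.Sequential(
    CausalConv1d(1, 8, kernel_size=3, dilation=1), nn.ReLU(),
    CausalConv1d(8, 8, kernel_size=3, dilation=2), nn.ReLU(),
    CausalConv1d(8, 1, kernel_size=3, dilation=4),
)
x = torch.randn(2, 1, 100)   # (batch, channels, time)
print(tcn(x).shape)          # output length equals input length
```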

Deep Learning for Inertial Navigation

A short review of cutting-edge deep learning-based solutions for inertial navigation.

Online Deep Learning (ODL) and Hedge Back-Propagation

Online learning is an ML setting in which data arrives in sequential order and the model is updated at each time step to predict future data. Online Deep Learning is very challenging because standard back-propagation training of deep networks is not well suited to this streaming setting.
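
A rough sketch of the Hedge idea behind ODL (my own simplified numpy illustration, not the paper's exact algorithm): several predictors, here standing in for classifiers attached at different network depths, are combined with weights that are multiplicatively decayed according to each predictor's loss on the incoming stream.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, beta = 3, 0.9            # "experts" stand in for per-depth classifiers
weights = np.ones(n_experts) / n_experts

for t in range(1000):
    x = rng.normal()
    target = 2.0 * x                 # toy stream: the true relation is y = 2x
    predictions = np.array([0.5 * x, 2.0 * x, -1.0 * x])  # fixed toy experts

    combined = weights @ predictions                        # Hedge-weighted prediction
    losses = np.clip((predictions - target) ** 2, 0.0, 1.0) # bounded per-expert losses

    weights *= beta ** losses        # multiplicative (Hedge) update
    weights /= weights.sum()         # re-normalize to a distribution

print(np.round(weights, 3))  # most weight should end up on the correct expert
```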