Evaluation is the process of assessing the reliability of an AI model by feeding a test dataset into the model and comparing its outputs with the actual answers.
Different evaluation techniques can be used, depending on the type and purpose of the model.
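As a minimal sketch of how the technique depends on the model type, the two small functions below compare predictions with actual answers in two common ways: accuracy for a classification model and mean squared error for a regression model. The tiny example datasets are invented purely for illustration.

```python
def accuracy(actual, predicted):
    """Classification: fraction of predictions that match the actual labels."""
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return correct / len(actual)

def mean_squared_error(actual, predicted):
    """Regression: average squared difference between actual and predicted values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Classification model: compare predicted labels with the actual labels.
print(accuracy(["spam", "ham", "spam"], ["spam", "ham", "ham"]))  # 2 of 3 correct

# Regression model: compare predicted numbers with the actual numbers.
print(mean_squared_error([3.0, 5.0], [2.0, 5.0]))
```

A classification metric counts right and wrong answers, while a regression metric measures how far off each numeric prediction is, which is why the choice of technique follows from the model's purpose.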
Remember that it is not recommended to evaluate the model on the same data that was used to build it. This is because the model may simply memorize the whole training set and therefore always predict the correct label for any point in it, giving a misleadingly perfect score. This is known as overfitting.
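The point above can be illustrated with a toy "model" that does nothing but memorize its training examples; the data and function names here are made up for demonstration, not a real training algorithm.

```python
def train(points, labels):
    """'Training' here is just storing every example verbatim."""
    return dict(zip(points, labels))

def predict(model, point):
    # Perfect recall for memorized points; no answer for anything unseen.
    return model.get(point, "unknown")

train_points = [(0, 0), (0, 1), (1, 0), (1, 1)]
train_labels = ["a", "a", "b", "b"]
model = train(train_points, train_labels)

# Evaluated on its own training set, the model looks perfect...
matches = [predict(model, p) == y for p, y in zip(train_points, train_labels)]
print(sum(matches) / len(matches))  # 1.0

# ...but on an unseen point it has learned nothing general.
print(predict(model, (2, 2)))  # unknown
```

Scoring 100% on the training set says nothing about unseen data, which is exactly why a separate test dataset is needed for evaluation.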
Evaluation is the last stage of the AI Project Cycle. All the stages are listed below:
Problem Scoping - Understanding the problem
Data Acquisition - Collecting accurate and reliable data