EIP2 : External Internship Program is a hands-on machine learning course being taught by The Inkers.
The first session, presented by Rohan Shravan, was pretty spot on; the major takeaway was that ML and CV are progressing at a rapid pace and many concepts have become obsolete, having been replaced by more efficient algorithms.
 Interesting new items heard during the Session
 Assignment Questions for Session 1
 Assignment answers
Interesting new items heard during the Session
 The Bored Cat Experiment
 Speech to Image : this was a research project 5 years ago. Can be proud of my idea now :P.
 Listening to convolutions as feature vectors : Video.
 gitxiv.com : papers which also publish source code for validation.
 fast.ai : for gathering the latest information.
 distill.pub : open-source ML papers, apart from arxiv.org.
 The logic behind top-5 and top-1 errors as accuracy metrics in ML papers.
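As a small sketch of what top-1 and top-5 accuracy mean: a prediction counts as top-k correct if the true class appears among the model's k highest-scoring classes. The scores and labels below are illustrative, not from any real model.

```python
import numpy as np

def topk_correct(scores, true_label, k):
    """True if the true label is among the k highest-scoring classes."""
    topk = np.argsort(scores)[-k:]
    return true_label in topk

scores = np.array([0.05, 0.60, 0.10, 0.20, 0.05])  # outputs for 5 classes
print(topk_correct(scores, true_label=3, k=1))  # False: class 1 scores highest
print(topk_correct(scores, true_label=3, k=2))  # True: class 3 is second-highest
```

Averaging this check over a test set gives the top-k accuracy reported in papers; top-5 error is simply 1 minus top-5 accuracy.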
Assignment Questions for Session 1
Write articles on the topics mentioned below, between 50-100 words each, in markdown format.
 Convolution
 Filters/Kernels
 Epochs
 1x1 convolution
 3x3 convolution
 Feature Maps
 Feature Engineering (older computer vision concept)
Bonus points if you write on any of these:
 Activation Function
 How to create an account on GitHub and upload a sample project
 Receptive Field.
 10 examples of use of MathJax in Markdown
Assignment answers
Name : Sachin S Shetty
Batch : 4
 Convolution
It is the element-wise multiplication and summation of matrices: a kernel/filter matrix of size p x q slides over an image matrix of size n x m, and at each position the overlapping values are multiplied and summed. The resulting convolved matrix can either 1. increase the depth of the output, expanding the feature vectors, or 2. decrease the spatial dimension for analysis.
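A minimal NumPy sketch of this sliding-window operation, assuming a single-channel image, no padding, and stride 1 (the sizes are illustrative):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image (no padding, stride 1),
    multiplying element-wise and summing at each position."""
    n, m = image.shape
    p, q = kernel.shape
    out = np.zeros((n - p + 1, m - q + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + p, j:j + q] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0   # simple averaging kernel
result = convolve2d(image, kernel)
print(result.shape)  # (2, 2): a 3x3 kernel shrinks a 4x4 image to 2x2
```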

Filters/Kernels
The kernels/filters are one side of the matrix multiplication in a convolution. These matrices are initialized with random values drawn from a required distribution, e.g. Gaussian. The random initialization allows a variety of image features to be picked up as the values are refined by backpropagation.
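A sketch of such a random initialization; the bank size, kernel size, and scale here are illustrative choices, not prescribed values:

```python
import numpy as np

rng = np.random.default_rng(0)
# A bank of 8 kernels of size 3x3, drawn from a Gaussian distribution.
# In a real network these are the learnable weights, later refined
# by backpropagation during training.
kernels = rng.normal(loc=0.0, scale=0.1, size=(8, 3, 3))
print(kernels.shape)  # (8, 3, 3)
```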

Epochs
An epoch is one complete pass of the convolution and backpropagation operations over the training set. All the images from the training set are not fed in a single run; they are taken in batches to train and improve the accuracy, so multiple operations take place within each epoch. The accuracy is calculated at the end of each epoch, and the experiment can either continue or stop based on that accuracy.
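The epoch/batch structure can be sketched as a loop; `run_epochs` is a hypothetical name, and the real forward/backpropagation steps are elided as comments:

```python
import numpy as np

def run_epochs(images, batch_size=4, epochs=3):
    """Sketch of a training loop: one epoch = one full pass over
    the training set, taken in mini-batches."""
    n = len(images)
    log = []
    for epoch in range(epochs):
        batches = 0
        for start in range(0, n, batch_size):
            batch = images[start:start + batch_size]  # one mini-batch
            # ... forward pass, loss, and backpropagation would run here ...
            batches += 1
        log.append(batches)  # every epoch visits all the batches
    return log

print(run_epochs(np.zeros((10, 8, 8))))  # [3, 3, 3]: 10 images / batches of 4
```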

1x1 convolution
Reduces the depth (number of channels) of the feature map after the convolution operation, without changing its spatial size: a 1x1 convolution mixes the input channels at each pixel.
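A sketch of the channel-mixing view of a 1x1 convolution; the shapes (5x5 spatial, 64 in / 16 out channels) are illustrative:

```python
import numpy as np

# A feature map of shape (height, width, channels) = (5, 5, 64).
fmap = np.random.rand(5, 5, 64)

# A 1x1 convolution with 16 output channels is just a per-pixel
# linear mix of the 64 input channels: a (64, 16) weight matrix.
weights = np.random.rand(64, 16)
out = fmap @ weights  # matrix multiply over the channel axis

print(out.shape)  # (5, 5, 16): spatial size unchanged, depth reduced
```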

3x3 convolution
Extracts features from a 3x3 neighbourhood. Applying many 3x3 filters increases the number of feature maps (the depth) after the convolution operation, while each unpadded 3x3 convolution shrinks the spatial dimensions by 2.
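The spatial shrink follows the standard output-size formula; `output_size` is an illustrative helper, not from any library:

```python
def output_size(n, kernel, padding=0, stride=1):
    """Spatial size of a convolution output: (n + 2p - k) // s + 1."""
    return (n + 2 * padding - kernel) // stride + 1

print(output_size(28, 3))             # 26: a 3x3 kernel shrinks each side by 2
print(output_size(28, 3, padding=1))  # 28: padding of 1 preserves the size
```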

Feature Maps
Feature maps are the result of the convolution operation, i.e. the multiplication of the feature matrix (image) with the filters/kernels; each filter produces one feature map.
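A sketch of the one-map-per-filter relationship, with illustrative sizes (an 8x8 image and four 3x3 filters):

```python
import numpy as np

image = np.random.rand(8, 8)
kernels = np.random.rand(4, 3, 3)  # 4 filters -> 4 feature maps

# Unpadded convolution of an 8x8 image with a 3x3 kernel gives 6x6.
feature_maps = np.stack([
    np.array([[np.sum(image[i:i + 3, j:j + 3] * k)
               for j in range(6)] for i in range(6)])
    for k in kernels
])
print(feature_maps.shape)  # (4, 6, 6): one 6x6 map per filter
```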

Feature Engineering (older computer vision concept)
Feature Engineering was the standard for solving computer vision problems before the advent of ML/AI, whose rise was helped in part by cloud computing.
Feature engineering techniques like the Hough transform for detecting lines/circles and the Sobel filter are still prevalent for basic CV tasks.
These techniques have the feature vectors/calculations hardcoded, i.e. the values have been calculated and fixed to solve a particular problem/domain.
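A small sketch of the hardcoded-weights idea: the Sobel kernel below is fixed by design rather than learned, and the toy image is illustrative:

```python
import numpy as np

# Hand-crafted Sobel kernel for vertical edges: the weights are fixed
# by design, not learned -- the hallmark of feature engineering.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# A toy image with a sharp vertical edge between columns 2 and 3.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

edges = np.array([[np.sum(image[i:i + 3, j:j + 3] * sobel_x)
                   for j in range(3)] for i in range(3)])
print(edges)  # strong responses only where the kernel straddles the edge
```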
Bonus points if you write on any of these:

Activation Function
Activation functions help bound the range of the matrix/signal values and introduce non-linearity. The prevailing standard function used in ML/AI is ReLU (Rectified Linear Unit): it rejects negative values, passing 0 instead, and passes positive values through unchanged.
A few other functions used earlier were sigmoid
 tanh
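The three functions mentioned above can be sketched directly in NumPy:

```python
import numpy as np

def relu(x):
    """Zero out negatives, pass positives through unchanged."""
    return np.maximum(0, x)

def sigmoid(x):
    """Squash values into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))        # [0. 0. 3.]
print(sigmoid(0.0))   # 0.5
print(np.tanh(0.0))   # 0.0; tanh squashes into (-1, 1)
```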

How to create an account on GitHub and upload a sample project. Link to a personal website: sample_blog.

Receptive Field.

10 examples of use of MathJax in Markdown