Photo by Liam Charmer on Unsplash

Having read the Network in Network (NiN) paper by Lin et al. a while ago, I came across a peculiar convolution operation that cascades parameters across channels, enabling the learning of complex interactions by pooling cross-channel information. …
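This cross-channel pooling is, in effect, a 1×1 convolution: a per-pixel linear map from the input channels to the output channels, with no spatial mixing. A minimal NumPy sketch (illustrative shapes, not taken from the paper):

```python
import numpy as np

# A 1x1 convolution mixes information across channels at each spatial
# position: it is a per-pixel linear map from C_in to C_out channels.
def conv1x1(x, w):
    """x: (C_in, H, W) feature map; w: (C_out, C_in) bank of 1x1 filters."""
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))   # a 3-channel 8x8 feature map
w = rng.standard_normal((5, 3))      # five 1x1 filters
y = conv1x1(x, w)
print(y.shape)  # (5, 8, 8): spatial size preserved, channels remixed
```

Note that every output pixel depends only on the input channels at the same location, which is exactly why the operation pools information *across* channels rather than across space.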


The high computational cost of inferring from a complicated, and often intractable, ‘true posterior distribution’ has always been a stumbling block in the Bayesian framework. However (and thankfully), certain inference techniques can achieve a reasonable approximation of this intractable posterior with something… tractable.
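One of the simplest such techniques is the Laplace approximation: replace the awkward posterior with a Gaussian centred at the MAP estimate, with variance set by the curvature of the log-posterior there. A sketch on a toy 1-D log-posterior (the Beta-shaped density below is an illustrative stand-in, not an example from the article):

```python
import numpy as np

def log_post(theta):
    # Toy unnormalized log-posterior: a Beta(5, 3)-shaped density on (0, 1).
    return 4 * np.log(theta) + 2 * np.log(1 - theta)

# Step 1: find the MAP estimate (here by brute-force grid search).
grid = np.linspace(1e-4, 1 - 1e-4, 10001)
map_theta = grid[np.argmax(log_post(grid))]

# Step 2: second derivative of the log-posterior at the MAP
# (finite differences), giving the Gaussian's precision.
h = 1e-4
curv = (log_post(map_theta + h) - 2 * log_post(map_theta)
        + log_post(map_theta - h)) / h**2
sigma = np.sqrt(-1.0 / curv)         # std dev of the Gaussian approximation

print(map_theta, sigma)              # MAP near 2/3, sigma near 0.19
```

The intractable normalizing constant never needs to be computed: the approximation only uses the location and curvature of the log-posterior's peak.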


Photo by Thomas Martinsen on Unsplash

This year marks the 60th anniversary of Rudolf E. Kálmán's 1960 paper that conferred upon the world the remarkable idea of the Kalman filter. Appreciation for the beauty (and simplicity) of this filtering technique often gets lost in technical, verbose definitions like the one found on Wikipedia:
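To get a taste of that simplicity, here is a minimal 1-D sketch: tracking a constant quantity from noisy measurements with a predict step and an update step per tick. The noise variances `q` and `r` are illustrative choices, not values from the article:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.5**2):
    """Scalar Kalman filter for a static state observed with noise."""
    x, p = 0.0, 1.0                 # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                   # predict: process noise inflates variance
        k = p / (p + r)             # Kalman gain: how much to trust z
        x = x + k * (z - x)         # update: pull estimate toward measurement
        p = (1 - k) * p             # update: variance shrinks after seeing z
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_value = 3.0
z = true_value + 0.5 * rng.standard_normal(200)   # noisy sensor readings
est = kalman_1d(z)
print(est[-1])   # settles near 3.0
```

The whole filter is two lines of algebra per measurement, which is exactly the simplicity the verbose definitions tend to bury.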

In statistics…


Picture by Franki Chamaki on Unsplash

If a picture says a thousand words, your data probably says a lot more! You just need to listen.

A few months ago, while working on a coursework assignment, we were posed a problem: to develop an analytical model capable of detecting malicious or fraudulent transactions from credit-card data logs…
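As a flavour of what "listening to the data" can mean here, consider the simplest possible baseline (not the assignment's actual model): flag transactions whose amounts are extreme outliers relative to the account's history, using a robust z-score built from the median and MAD:

```python
import numpy as np

def flag_outliers(amounts, threshold=3.5):
    """Flag amounts whose robust z-score exceeds the threshold.

    Uses median/MAD rather than mean/std so the outliers themselves
    cannot mask the detection. A toy baseline, not a real fraud model.
    """
    amounts = np.asarray(amounts, dtype=float)
    med = np.median(amounts)
    mad = np.median(np.abs(amounts - med)) or 1e-9   # guard divide-by-zero
    robust_z = 0.6745 * (amounts - med) / mad        # 0.6745 rescales MAD
    return np.abs(robust_z) > threshold

amounts = [12.5, 9.9, 14.2, 11.0, 10.4, 950.0, 13.1]
print(flag_outliers(amounts))   # only the 950.0 transaction is flagged
```

Real fraud models use far richer features (merchant, time, geography, velocity), but the core idea of scoring each transaction for anomalousness is the same.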


Bayesian Inference is based on generative thinking

In the Machine Learning domain, we often find ourselves pursuing a functional relationship between the attribute variables (i.e., features or simply the input data, x) and the associated response (i.e., the target variable, y). Being able to learn this relationship allows one to build a model that…
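The generative framing makes this concrete: posit how y is produced given a parameter θ (the likelihood), state a prior over θ, and invert with Bayes' rule. A minimal sketch for a coin-bias example (the data and grid resolution are illustrative):

```python
import numpy as np

# Generative story: each flip is Bernoulli(theta) with unknown bias theta.
theta = np.linspace(0, 1, 501)            # candidate biases on a grid
prior = np.ones_like(theta)               # flat prior over [0, 1]

heads, tails = 7, 3                       # observed data: 7 heads, 3 tails
likelihood = theta**heads * (1 - theta)**tails

# Bayes' rule on the grid: posterior ∝ prior × likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()              # normalize (grid approximation)

print(theta[np.argmax(posterior)])        # posterior mode: 0.7
```

The same three ingredients, prior, likelihood, normalization, carry over unchanged to models where the grid must be replaced by smarter inference.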


Photo by Ryan Stone on Unsplash

Let's start with a small exercise. But first, I have to ask you to put on your fitness tracker; you do own one, right? Oh… you thought the exercise was mental? Disastrous. The point is, if you've been even a tiny bit observant with the fitness device that sits on…


Photo by Volkan Olmez on Unsplash

When we think of an image, it's almost natural to think of a two-dimensional (2-D) representation, right? Well, on reading the previous sentence carefully, ‘almost’ is the keyword.

In a recent paper, Scaling down deep learning, Sam Greydanus introduced a rather cool 1-D image dataset called MNIST-1D, labelling…
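On a 1-D "image", the familiar convolution simply slides a kernel along a single axis. A toy sketch (the step signal and edge kernel are illustrative, not from the MNIST-1D paper): an edge-detector kernel highlights where the signal changes, the 1-D analogue of edge detection in a 2-D image:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: slide kernel along the signal."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

signal = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=float)  # step up, step down
edges = conv1d(signal, np.array([-1.0, 0.0, 1.0]))        # edge-detector kernel
print(edges)   # nonzero only around the two steps
```

Everything a 2-D convnet does spatially has this cheaper 1-D counterpart, which is what makes a scaled-down dataset like MNIST-1D so handy for quick experiments.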


Photo by Matthew Osborn on Unsplash

Back in high school, we all had that phase of derision in response to our teachers' frequent reprimands to scrupulously show the steps leading to a solution, especially while working out questions in an examination. “It promotes readability and calls attention to your intuition!”, they said. I vividly remember…

Anwesh Marwade

A final-year student pursuing a Master's in Data Science and Computer Vision. A motivated learner with a liking for Sports Analytics. anweshcr7.github.io
