Learning is a lifelong process. But do you know what, where, and how to learn? What skills to develop? Which skills will help boost your career? If not, you are in the right place! Our tutorial section at CoderzColumn is dedicated to providing you with practical lessons. They will give you hands-on experience of learning Python for different purposes and coding on your own. Our tutorials cover:
For an in-depth understanding of the above concepts, check out the sections below.
The tutorial guides you on how to use GloVe word embeddings with Haiku (JAX) networks to solve text classification tasks. It explains various ways of handling embeddings to get better results.
The tutorial explains how to use pre-trained MXNet models for solving object detection tasks. MXNet has a helper library, GluonCV, which maintains pre-trained models for computer vision tasks. We have covered how to load a model and use it on random images downloaded from the internet to detect objects in them.
The tutorial guides you on how to use pre-trained PyTorch models/networks for object detection tasks. PyTorch provides pre-trained models through its torchvision module. We have explained how to load a model and run it on random images from the internet to detect objects in them.
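To give a feel for this workflow, here is a minimal sketch using a pre-trained Faster R-CNN from torchvision; the image file name is a hypothetical placeholder, and the confidence threshold is an arbitrary choice.

```python
# A minimal sketch of object detection with a pre-trained torchvision model.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pre-trained Faster R-CNN (newer torchvision versions use the
# 'weights=' argument instead of 'pretrained=True') and switch to eval mode.
model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Read an image and convert it to a [C, H, W] float tensor in the 0-1 range.
image = Image.open("street_scene.jpg").convert("RGB")  # hypothetical file name
tensor = transforms.ToTensor()(image)

# Detection models take a list of tensors and return one dict per image
# with 'boxes', 'labels', and 'scores' keys.
with torch.no_grad():
    prediction = model([tensor])[0]

# Keep only confident detections.
keep = prediction["scores"] > 0.8
print(prediction["boxes"][keep], prediction["labels"][keep])
```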
The tutorial guides you through creating neural networks with the Python library Haiku that use a word embeddings approach for solving text classification tasks. Haiku is a high-level deep learning library built on top of JAX.
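As a rough illustration of the idea, here is a minimal Haiku sketch of a text classifier with a learned embedding layer; the vocabulary size, embedding size, and number of classes are made-up values for illustration.

```python
# A minimal sketch of a Haiku text classifier built on word embeddings.
import haiku as hk
import jax
import jax.numpy as jnp

def classifier_fn(token_ids):
    # Map integer token ids to dense vectors, average them over the
    # sequence, then project to class logits.
    embeddings = hk.Embed(vocab_size=10_000, embed_dim=50)(token_ids)
    sentence_vec = embeddings.mean(axis=1)
    return hk.Linear(2)(sentence_vec)  # 2 target classes (assumed)

# hk.transform turns the function into a pure (init, apply) pair.
net = hk.without_apply_rng(hk.transform(classifier_fn))

dummy_batch = jnp.zeros((8, 25), dtype=jnp.int32)  # 8 texts, 25 tokens each
params = net.init(jax.random.PRNGKey(42), dummy_batch)
logits = net.apply(params, dummy_batch)
print(logits.shape)  # (8, 2)
```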
The tutorial explains how to create RNNs (consisting of LSTM layers) to solve time-series regression tasks using MXNet. The dataset used in the tutorial is a multivariate time-series dataset.
The tutorial covers how we can solve text classification tasks using Haiku neural networks. It uses a basic word frequency (bag of words) approach to encode text data before giving it to the neural network.
The tutorial explains how we can create RNNs consisting of LSTM layers for solving time-series regression tasks. LSTM networks (RNNs) are preferred for tasks involving time-series data as they better capture order/sequence.
The tutorial provides a guide on creating RNNs consisting of LSTM layers for solving text generation tasks. It uses a character-based approach to generate new text. The text data is encoded using the character embeddings approach.
The tutorial covers how we can create Recurrent Neural Networks (RNNs) consisting of LSTM layers for text generation tasks. It uses a character-based approach (working on characters instead of words/n-grams) to generate new text. The text is encoded using the bag of words approach before being given to the LSTM layers for processing.
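To clarify the character-based encoding step these tutorials share, here is a minimal plain-Python sketch of building a character vocabulary and mapping text to integer ids; the text is a stand-in for a real training corpus.

```python
# A minimal sketch of character-level encoding for text generation.
text = "hello world"  # stand-in for a real training corpus

# Build a character vocabulary and forward/backward lookup tables.
chars = sorted(set(text))
char_to_id = {ch: i for i, ch in enumerate(chars)}
id_to_char = {i: ch for ch, i in char_to_id.items()}

# Encode the text as a sequence of integer ids, one per character.
encoded = [char_to_id[ch] for ch in text]
print(encoded)
print("".join(id_to_char[i] for i in encoded))  # round-trips to "hello world"
```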
The tutorial explains how to create Recurrent Neural Networks (RNNs) consisting of LSTM Layers to solve time-series regression tasks. LSTM networks are quite good at tasks involving time-series data.
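As a reference point, here is a minimal PyTorch sketch of the kind of LSTM regressor such tutorials build (the tutorials above use various libraries); the layer sizes and tensor shapes are arbitrary choices for illustration.

```python
# A minimal sketch of an LSTM-based regressor for time-series data.
import torch
import torch.nn as nn

class LSTMRegressor(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # single regression target

    def forward(self, x):
        # x: (batch, time_steps, n_features)
        output, _ = self.lstm(x)
        # Use the hidden state of the last time step for the prediction.
        return self.head(output[:, -1, :])

model = LSTMRegressor(n_features=5)
batch = torch.randn(16, 30, 5)  # 16 windows of 30 time steps, 5 variables each
print(model(batch).shape)  # torch.Size([16, 1])
```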
Parallel computing is a type of computation where tasks are assigned to individual processes for completion. These processes can run on a single computer or a cluster of computers. Parallel computing makes multi-tasking much faster.
Python provides different libraries (joblib, dask, ipyparallel, etc) for performing parallel computing.
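As a taste of how simple this can be, here is a minimal sketch using joblib; the squared-number function is a stand-in for a genuinely expensive computation.

```python
# A minimal sketch of parallel computing with joblib: run a function on
# several inputs using multiple worker processes.
from joblib import Parallel, delayed

def slow_square(x):
    return x * x  # stand-in for an expensive computation

# n_jobs=-1 uses all available CPU cores; each call runs in its own worker.
results = Parallel(n_jobs=-1)(delayed(slow_square)(i) for i in range(10))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```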
Concurrent computing is a type of computing where multiple tasks make progress during overlapping time periods instead of strictly one after another. In concurrent programming, we divide a big task into small tasks and execute them concurrently using threads or processes.
Python provides various libraries (threading, multiprocessing, concurrent.futures, asyncio, etc) to create concurrent code.
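For instance, here is a minimal sketch using the standard-library concurrent.futures module; the fetch function and URLs are hypothetical placeholders for real I/O-bound work.

```python
# A minimal sketch of concurrent execution with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    return f"fetched {url}"  # stand-in for a real I/O-bound operation

urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

# Threads suit I/O-bound work; swap in ProcessPoolExecutor for CPU-bound work.
with ThreadPoolExecutor(max_workers=3) as executor:
    for result in executor.map(fetch, urls):
        print(result)
```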
Once our Machine Learning model is trained, we need some way to evaluate its performance. We need to know whether our model has generalized or not.
For this, various metrics (confusion matrix, ROC AUC curve, precision-recall curve, silhouette analysis, elbow method, etc) have been designed over time. These metrics help us understand the performance of models trained on various tasks like classification, regression, clustering, etc.
Python has various libraries (scikit-learn, scikit-plot, yellowbrick, interpret-ml, interpret-text, etc) to calculate and visualize these metrics.
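Here is a minimal sketch of the evaluation step with scikit-learn, using one of its bundled datasets and a simple classifier chosen for illustration.

```python
# A minimal sketch of evaluating a classifier with scikit-learn metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

print(confusion_matrix(y_test, model.predict(X_test)))
# ROC AUC needs probability scores for the positive class.
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```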
After training an ML model, we generally evaluate its performance by calculating and visualizing various ML metrics (confusion matrix, ROC AUC curve, precision-recall curve, silhouette analysis, elbow method, etc).
These metrics are normally a good starting point. But in many situations, they don't give a complete picture of model performance. E.g., a simple cat vs dog image classifier can be using background pixels to classify images instead of the actual object (cat or dog) pixels.
In these situations, our ML metrics will still report good results, so we should always be a little skeptical of model performance.
We can dive deeper and try to understand how our model performs on individual examples by interpreting its predictions. Various algorithms have been developed over time to interpret the predictions of ML models, and many Python libraries (lime, eli5, treeinterpreter, shap, etc) provide their implementations.
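As one example of the idea, here is a minimal sketch using the shap library with a tree-based model; the dataset and model are illustrative choices, not the setup of any particular tutorial.

```python
# A minimal sketch of interpreting individual predictions with shap,
# assuming a tree-based model trained on tabular data.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions (SHAP values) for
# each prediction of a tree ensemble.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Each row shows how much every feature pushed that sample's
# prediction up or down.
print(shap_values)
```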
Data visualization is the graphical representation of information/data. It is one of the most efficient ways of communicating information to users, as humans are quite good at spotting patterns in data.
Python has a bunch of libraries that can help us create data visualizations. Some of these libraries (matplotlib, seaborn, plotnine, etc) generate static charts whereas others (bokeh, plotly, bqplot, altair, holoviews, cufflinks, hvplot, etc) generate interactive charts. The majority of basic visualizations like bar charts, line charts, scatter plots, histograms, box plots, pie charts, etc are supported by all of these libraries. Many libraries also support advanced visualizations, widgets, and dashboards.
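To show how little code a basic chart takes, here is a minimal matplotlib sketch; the sales numbers are made up for illustration.

```python
# A minimal sketch of a static bar chart with matplotlib.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 95, 140, 160]  # made-up numbers for illustration

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(months, sales, color="tab:blue")
ax.set_xlabel("Month")
ax.set_ylabel("Sales")
ax.set_title("Monthly Sales")
plt.show()
```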
Basic Data Visualizations like bar charts, line charts, scatter plots, histograms, box plots, pie charts, etc are quite good at representing information and exploring relationships between data variables.
But sometimes these visualizations are not enough and we need to analyze data from different perspectives. For this purpose, many advanced visualizations have been developed over time, like Sankey diagrams, candlestick charts, network charts, chord diagrams, sunburst charts, radar charts, parallel coordinates charts, etc. Python has many data visualization libraries that let us create such advanced data visualizations.
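As one example of an advanced chart, here is a minimal candlestick sketch with plotly; the dates and OHLC price values are made up for illustration.

```python
# A minimal sketch of a candlestick chart with plotly.
import plotly.graph_objects as go

fig = go.Figure(data=[go.Candlestick(
    x=["2021-01-04", "2021-01-05", "2021-01-06"],
    open=[100, 104, 101],
    high=[106, 108, 105],
    low=[98, 100, 99],
    close=[104, 101, 103],
)])
fig.update_layout(title="Sample Candlestick Chart")
fig.show()
```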
Deep learning is a field of machine learning that uses deep neural networks to solve tasks. Neural networks with more than one hidden layer are generally referred to as deep neural networks.
Many real-world tasks like object detection, image classification, image segmentation, etc cannot be solved with simple machine learning models (decision trees, random forests, logistic regression, etc). Research has shown that neural networks with many layers are quite good at solving these kinds of tasks involving unstructured data (image, text, audio, video, etc). Deep neural networks nowadays can have different kinds of layers like convolution, recurrent, etc apart from dense layers.
Python has many famous deep learning libraries (PyTorch, Keras, JAX, Flax, MXNet, Tensorflow, Sonnet, Haiku, PyTorch Lightning, Scikeras, Skorch, etc) that let us create deep neural networks to solve complicated tasks.
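To make the "more than one hidden layer" definition concrete, here is a minimal sketch of a deep network in PyTorch (one of the libraries listed above); the layer sizes are arbitrary.

```python
# A minimal sketch of a deep neural network (two hidden layers) in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),   # input layer: 20 features (assumed)
    nn.ReLU(),
    nn.Linear(64, 32),   # second hidden layer makes the network "deep"
    nn.ReLU(),
    nn.Linear(32, 2),    # output layer: 2 classes (assumed)
)

batch = torch.randn(8, 20)
print(model(batch).shape)  # torch.Size([8, 2])
```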
Image classification is a sub-field of computer vision and image processing that identifies the object present in an image and assigns the image a label based on it. Image classification generally works on images with a single object present in them.
Over the years, many deep neural networks (VGG, ResNet, AlexNet, MobileNet, etc) were developed that solve the image classification task with quite high accuracy. Due to the high accuracy of these networks, many Python deep learning libraries started providing them. We can simply load these networks with pre-trained weights and make predictions using them.
Python libraries PyTorch and MXNet have helper modules named 'torchvision' and 'gluoncv' respectively that provide implementations of image classification networks.
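Here is a minimal sketch of that workflow with torchvision; the image file name is a hypothetical placeholder, and the preprocessing values are the commonly used ImageNet statistics.

```python
# A minimal sketch of image classification with a pre-trained torchvision model.
import torch
from torchvision import models, transforms
from PIL import Image

# Load ResNet-18 with ImageNet weights (newer torchvision versions use the
# 'weights=' argument instead of 'pretrained=True') and switch to eval mode.
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg").convert("RGB")  # hypothetical image file
batch = preprocess(image).unsqueeze(0)        # add a batch dimension

with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1))  # index of the predicted ImageNet class
```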