December 6, 2018 | Exploratory Data Analysis, Data Analysis, Open Source, IssueHunt, React, MySQL, NodeJS, Web Scraping | Data Science
To keep up with advances in technology, one activity that software engineers often do is contribute to Open Source. I'll be restricting this to contributing to other, existing projects, not your own.
However, there are some obstacles to contributing.
Some would see not contributing to Open Source as selfish: after all, you get to use free tools, so you should be grateful. I honestly don't like this line of thinking. Not everyone wants to spend all of their time programming, some projects have contributing policies that are a hassle to deal with, and some people would rather do a side hustle and earn extra money.
Fortunately, there are a couple of websites that focus on earning money while contributing to Open Source. I ran across a few different sites:
For this post, I'll be mainly focusing on IssueHunt.
by Joseph Woolf
September 19, 2018 | Natural Language Processing | Machine Learning, Deep Learning, Natural Language Processing
Most models in machine learning require working with numbers. After all, many of the machine learning algorithms we've seen are derived from statistics (Linear Regression, Logistic Regression, Naive Bayes, etc.). Additionally, machines can understand and work with numbers far more easily than we humans can.
However, machines just process the numbers and execute algorithms. They don't interpret the numbers they return, they don't understand the context of the data, and they especially don't understand human intricacies, which leaves them easily taken advantage of by rogue actors.
So then, is it actually possible for computers to understand humans? Can we ever have conversations with computers? In a sense, we already can! This is thanks to a branch of AI called Natural Language Processing.
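Before any NLP model can run, text has to be turned into numbers. As a minimal sketch of that idea (this bag-of-words helper is my own illustration, not code from the post), each document can be represented as a vector of word counts over a shared vocabulary:

```python
from collections import Counter

def bag_of_words(docs):
    """Build a shared vocabulary, then represent each document
    as a vector of word counts over that vocabulary."""
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    vectors = []
    for doc in docs:
        counts = Counter(doc.lower().split())
        vectors.append([counts.get(word, 0) for word in vocab])
    return vocab, vectors

docs = ["the cat sat", "the cat sat on the mat"]
vocab, vectors = bag_of_words(docs)
# vocab   -> ['cat', 'mat', 'on', 'sat', 'the']
# vectors -> [[1, 0, 0, 1, 1], [1, 1, 1, 1, 2]]
```

Real NLP pipelines go far beyond raw counts (TF-IDF weighting, word embeddings), but they all start from some numeric encoding like this.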
September 12, 2018 | machine learning, overfitting, underfitting, bias, variance | Machine Learning
Let's imagine you're researching an ideal rental property. You gather your data, open up your favorite programming environment, and get to work performing Exploratory Data Analysis (EDA). During your EDA, you find some dirty data and clean it for training. You decide on a model, separate the data into training, validation, and test sets, and train your model on the cleaned data. Upon evaluating your model, you notice that both your validation error and your test error are very high.
Now suppose you pick a different model or add additional features. Your validation error is now much lower. Great! However, when you run the test data through, you notice that the test error is still high. What just happened?
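The three-way split described above can be sketched in a few lines. This is a generic illustration (fractions and seed are arbitrary choices, not values from the post): shuffle once, then carve out disjoint validation and test sets, keeping the test set untouched until the very end.

```python
import random

def train_val_test_split(data, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle once, then carve the data into three disjoint sets.
    The validation set guides model selection; the test set is only
    used at the very end, as a final check against overfitting."""
    rows = list(data)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = rows[:n_test]
    val = rows[n_test:n_test + n_val]
    train = rows[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
# 60 training rows, 20 validation rows, 20 test rows
```

Tuning a model against the validation set over and over can itself overfit the validation set, which is exactly why the held-out test error in the story above stayed high.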
September 5, 2018 | Deep Learning, machine learning, neural network | Machine Learning, Deep Learning
I admit, I'm late to the whole Neural Network party. With all of the major news coverage of AI systems that use neural networks as part of their implementation, you'd have to be living under a rock not to know about them. While it's true that they can provide more flexible models than other machine learning algorithms, they can be challenging to work with.
August 20, 2018 | OpenCV, Java, Maven | OpenCV
When you learn about OpenCV, you'll often run into OpenCV for Python or C++, but not Java. I can understand that: OpenCV for Python is a glorified NumPy extension, and OpenCV in C++ is very fast. However, you may have a legitimate need to use Java instead of Python or C++.
In a professional setting, Java users are likely to use Apache Maven so that everyone gets the same version of each dependency without build and runtime issues. Sure, you can always install the library and set up the CLASSPATH to point at OpenCV, but I find it better to let Maven handle the libraries. Just note that there is no official Maven repository for OpenCV at the time of writing, though others have uploaded alternative artifacts.
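For example, one community-maintained artifact on Maven Central is `org.openpnp:opencv`, which bundles the native libraries with the Java bindings. The coordinates and version below are illustrative; check Maven Central for the current release before depending on it:

```xml
<!-- Unofficial, community-maintained OpenCV artifact; version shown is illustrative -->
<dependency>
    <groupId>org.openpnp</groupId>
    <artifactId>opencv</artifactId>
    <version>4.5.1-2</version>
</dependency>
```

Since it is not an official OpenCV release, pin an exact version and verify it matches the OpenCV API level your code targets.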
August 8, 2018 | Uncategorized
Sports. Sports. Sports.
Some people love watching them; others love playing them. The US loves its football, while Latin America loves its soccer. As much as we fight and bicker about which sport is the best or whose favorite team is the best, many people love sports as a pastime and follow their teams religiously.
While I'm not a sports fan, I did come across an interesting dataset on data.world that determined the toughest sport to pick up. Even though the dataset is framed in an objective manner, I would like to ask a different question: based on the sports data and a person's abilities, which sport would be optimal for them?
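One hypothetical way to frame "which sport is optimal" is to score each sport by how well the attributes it demands line up with a person's abilities. The attribute names and all the numbers below are made up for illustration (they are not from the data.world dataset); a simple dot product between the two rating vectors gives a match score:

```python
def best_sport(person, sports):
    """Rank sports by the alignment between the attributes a sport
    demands and a person's abilities (dot product of the ratings)."""
    def score(demands):
        return sum(person.get(attr, 0) * weight for attr, weight in demands.items())
    return max(sports, key=lambda name: score(sports[name]))

# Toy numbers, purely illustrative (0-10 scales).
person = {"endurance": 9, "strength": 3, "hand_eye": 2}
sports = {
    "distance running": {"endurance": 9, "strength": 2, "hand_eye": 1},
    "weightlifting":    {"endurance": 3, "strength": 10, "hand_eye": 2},
    "baseball":         {"endurance": 4, "strength": 5, "hand_eye": 9},
}
# best_sport(person, sports) -> "distance running"
```

A real analysis would normalize the attribute scales and possibly weight attributes by how much they matter to the person, but the matching idea is the same.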
June 12, 2018 | Data Science, Machine Learning, Apache Spark
For those wanting to work with Big Data, it isn't enough to simply know a programming language and a small-scale library. Once your data reaches many gigabytes, if not terabytes, in size, working with it becomes cumbersome: your computer can only run so fast and store so much. At this point, you'd look into the tooling used for massive amounts of data, and one of the tools you'd consider is Apache Spark. In this post, we'll look at what Spark is, what we can do with it, and why you'd use it.
May 11, 2018 | Data Cleaning
One of the recent datasets that I picked up was a Kaggle dataset called "The Interview Attendance Problem". This dataset focuses on job candidates in India attending interviews for several different companies across a few different industries. The objective is to determine whether a job candidate is likely to show up.
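Survey-style columns like these tend to carry inconsistent spellings and stray whitespace for what should be simple yes/no answers. As a small, generic sketch of the kind of cleaning involved (the specific variants and the decision to fold "not yet" into "no" are my own illustrative choices, not rules from the dataset), one can normalize each value before modeling:

```python
def normalize_yes_no(value):
    """Collapse messy survey-style answers into 'yes', 'no', or None.
    The variants handled here are illustrative of the inconsistencies
    commonly found in free-form survey columns."""
    if value is None:
        return None
    cleaned = value.strip().lower()
    if cleaned in {"yes", "y"}:
        return "yes"
    # Treating "not yet" as "no" is a judgment call for illustration.
    if cleaned in {"no", "n", "not yet"}:
        return "no"
    return None  # unrecognized answers become missing values

raw = ["Yes", " yes ", "NO", "Not yet", "uncertain", None]
cleaned = [normalize_yes_no(v) for v in raw]
# -> ['yes', 'yes', 'no', 'no', None, None]
```

Deciding what to do with the leftover `None` values (drop the rows, impute, or model missingness explicitly) is its own step in the cleaning pipeline.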
March 18, 2018 | dataset, Kaggle, EDA, Exploratory Data Analysis | Datasets, Data Science
With Data Science being a very popular field to get into, it's no surprise that the number of contributions to Kaggle has dramatically increased. I recently stumbled across a dataset that gathered the most popular kernels and decided to do some exploratory data analysis on it.
December 28, 2017 | dataset, Kaggle, classification | Datasets, Machine Learning, Deep Learning
One of the hottest disciplines in the tech industry in 2017 was Deep Learning. Thanks to Deep Learning, many startups put an emphasis on AI, and many frameworks have been developed to make implementing these algorithms easier. Google's DeepMind even created AlphaGo Zero, which mastered the game of Go without relying on human game data. However, the analysis in this post is much more basic than anything developed recently. In fact, the dataset is the popular MNIST database: handwritten digits used to test out computer vision.