First Consulting Contract

This week, I closed on the first-ever consulting contract for Tarada Consulting, LLC. While I can’t currently discuss the client, I can say that I will be working on an interesting machine learning problem as a consultant.

I’m hoping that Tarada Consulting will continue to win consulting contracts and allow me to gain more experience in machine learning. There are also some potential opportunities to collaborate with researchers in other fields and apply machine learning to areas where it has never been applied before.

Once this contract is complete, if I receive permission to publish my client’s name, then I’ll put up an update at that time.

Research Update and Consulting

I spent the past few weeks working very hard to prepare my lab for important upcoming deadlines. After the long hours were over, I decided to leave my current position. However, I will be reviewing all papers I co-authored as they continue to move through the publishing pipeline. I am co-author on a paper that the lab is set to submit to ICRA 2018, which will be on multi-task learning of behavioral modes in autonomous driving. Another paper I am co-author on will be submitted to ICLR 2018, whose deadline is a bit further out. That paper will be on my work on autonomous driving with SqueezeNet and LSTMs.

In the meantime, I have formed a consulting firm, Tarada Consulting, LLC, through which I will be doing deep learning consulting. I have a number of projects in the works, though I may not be able to discuss some of them here because they are confidential. I will be sure to detail any projects that are not encumbered by NDAs or other confidentiality requirements.

Switching Dataset Formats

During the last week at Karl’s autonomous RC car lab, we made significant progress in fixing the slow training speed and memory leaks. Essentially, there were two main problems. One problem was that the internal structure of the dataset within our HDF5 files was much too complicated. The second was that our autonomous driving dataset is simply too large and complex to do any substantial on-the-fly data processing during training.

Improving HDF5 Layout

To address the first problem, Karl created a new, somewhat simplified layout inside the HDF5 files containing the dataset, which made random access significantly more efficient. Previously, random access within the dataset required multiple dictionary lookups, which are significantly slower than indexing into an array. Karl flattened all of this into a single, large, multi-dimensional array holding all of the data, addressed with integer indices.
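To make the difference concrete, here is a rough sketch of the two layouts using h5py. The file names, dataset names, and shapes are made up for illustration; they are not our lab’s actual schema.

```python
import h5py
import numpy as np

# Hypothetical "old" layout: nested groups mean several string/dict-style
# lookups before you ever reach the array you want.
with h5py.File("run_old.h5", "w") as f:
    cam = f.create_group("sensors").create_group("camera")
    cam.create_dataset("frames", data=np.zeros((1000, 64, 64, 3), dtype=np.uint8))

with h5py.File("run_old.h5", "r") as f:
    frame = f["sensors"]["camera"]["frames"][42]   # multiple lookups per sample

# Hypothetical "new" layout: one large multi-dimensional array, so fetching a
# random sample is a single integer-indexed read.
with h5py.File("run_new.h5", "w") as f:
    f.create_dataset("data", data=np.zeros((1000, 64, 64, 3), dtype=np.uint8))

with h5py.File("run_new.h5", "r") as f:
    frame = f["data"][42]                          # one indexed read
```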

Pre-processing

The second issue had only one answer: pre-processing. We finally created pre-processing code that worked, built as a pipeline made up of multiple stages. I won’t go through all of them here, but I’ll discuss the crucial part.

We made a complete pass through the dataset to make it fully ready for training. In the old system, we had a couple hundred HDF5 files called “runs.” Each run is a set of data collected in an uninterrupted timeline. However, the entire timeline wasn’t necessarily good for training. When the car was picked up or not moving, data was still recorded in the run, but it would not have been useful to train on it. Instead, we had a system for converting runs into “segments” on the fly. Each segment is a set of data collected in an uninterrupted timeline that consists entirely of usable data. In the new system, when we passed through the data during pre-processing, we broke up each run HDF5 file into several segment HDF5 files, each containing a continuous stream of trainable data; anything that wasn’t trainable was discarded. We ended up with clean, compact files that each contained only continuous, usable data.
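A stripped-down sketch of that run-to-segment split is below. The dataset name, the usability mask, and the minimum segment length are hypothetical stand-ins for what the real pipeline computes, but the core idea is the same: find contiguous stretches of usable frames and write each one to its own file.

```python
import os
import h5py
import numpy as np

def split_run_into_segments(run_path, usable_mask, min_length=32):
    """Split one 'run' file into segment files of continuous, usable data.

    `usable_mask` is a boolean array (hypothetical) marking frames that are
    trainable, e.g. the car is on the ground and moving.
    """
    with h5py.File(run_path, "r") as run:
        data = run["data"][:]                      # load the flattened array

    # Find runs of consecutive usable frames.
    segments = []
    start = None
    for i, ok in enumerate(usable_mask):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(usable_mask)))

    # Write each sufficiently long segment to its own HDF5 file;
    # everything else is discarded.
    base, _ = os.path.splitext(run_path)
    for n, (lo, hi) in enumerate(segments):
        if hi - lo < min_length:
            continue
        with h5py.File(f"{base}_segment{n}.h5", "w") as seg:
            seg.create_dataset("data", data=data[lo:hi])
```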

ONNX for Neural Networks

Just in the last few days, I’ve been seeing a lot about a new open source format for neural network models called Open Neural Network Exchange, or ONNX. I haven’t yet gotten a chance to try it out myself, but it looks very promising.

ONNX appears to be a way to save neural network models from multiple deep learning frameworks in a universal, cross-compatible format. If it really works, it would be a major advancement, since models built in one deep learning framework are currently very hard to translate to another.

I can think of several cases in projects I’ve already worked on where a format like ONNX would have been immensely helpful. The way I’d use it, ONNX would let me take advantage of each deep learning framework’s strengths while avoiding its weaknesses: whenever I ran into a framework-specific issue, I could simply load my weights file into a different framework.

I hope that ONNX ends up being integrated into every major deep learning framework. Its GitHub page claims that Caffe2, PyTorch, and Cognitive Toolkit will all support ONNX. However, for it to take off, I’d expect TensorFlow/Keras support to be absolutely crucial. This will be an interesting project to watch. When I have some time to try out ONNX, I may test whether I can transfer some simple networks between PyTorch and Caffe2.
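When I do get around to it, the experiment would probably start with something like the sketch below: export a toy PyTorch model to an .onnx file and check that the resulting graph is well-formed. This assumes the onnx Python package and PyTorch’s built-in torch.onnx exporter; loading the file into Caffe2 (or another backend) would be the second half of the test.

```python
import torch
import torch.nn as nn
import onnx

# A stand-in network; a real test would use something closer to my driving models.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
model.eval()

# Export to ONNX. The dummy input fixes the input shape traced into the graph.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "simple_net.onnx")

# Sanity-check the exported graph before handing it to another framework.
onnx_model = onnx.load("simple_net.onnx")
onnx.checker.check_model(onnx_model)
```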