It’s amazing how little things can turn a data science or mathematical modelling project into a full-fledged mess. These traps are easily avoidable, but unfortunately also easily forgotten: we love to pretend we “couldn’t do something that silly”.
Following the principles below has saved me (and others) a lot of time and spared me from unnecessary pain:
Always check your data
No one has ever been arrested for excess logging. If you don’t check the inputs to your models and the outputs of intermediate calculations, you are in for disappointing results.
There is a promise of a greener future in which deep learning will free us from all the mess of pre-processing: we will ingest raw data in any format and the intermediate layers will sort it out. Sorry, but here in real life we are still far from that. Garbage in, garbage out.
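A minimal sketch of this habit in plain Python (the function name `check_inputs` and the specific statistics logged are illustrative, not from the original text): log summary statistics at each intermediate step and flag anything suspicious before it propagates downstream.

```python
import logging
import math

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def check_inputs(name, values):
    """Log basic statistics for an intermediate result and flag missing values.

    Returns True if the data looks clean, False otherwise.
    """
    # Count None and NaN entries -- both are silent killers downstream.
    n_missing = sum(
        1 for v in values
        if v is None or (isinstance(v, float) and math.isnan(v))
    )
    finite = [
        v for v in values
        if v is not None and not (isinstance(v, float) and math.isnan(v))
    ]
    log.info(
        "%s: n=%d, missing=%d, min=%s, max=%s",
        name, len(values), n_missing,
        min(finite) if finite else None,
        max(finite) if finite else None,
    )
    if n_missing:
        log.warning("%s: %d missing values", name, n_missing)
    return n_missing == 0

# Call it after every transformation step, not just at ingestion.
clean = check_inputs("monthly_totals", [120.0, 95.5, float("nan"), 130.2])
```

A few lines like these cost nothing at runtime and turn “why is the forecast zero?” from a day of debugging into a single glance at the log.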
Test your model on limit cases
This is somewhat related to the previous point, but not always. What happens to your model in the limit? Many scientists ask this kind of question as a basic sanity check, and you should too: does your model predict what the domain expert or common sense expects when you make one variable very large or very small? If not, something is wrong. Do you have a positive regression coefficient for monthly totals, but a negative one for average totals? Time to revisit your model.
A common source of errors here is scale: having two variables on wildly different scales often produces odd, counter-intuitive models.
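One way to automate this kind of sanity check is a small helper that pushes one input to extreme values while holding the others fixed and verifies that the prediction moves in the direction the domain expert expects. The names `predict` and `check_monotone` below are hypothetical, and the coefficients are made up for illustration.

```python
def predict(monthly_total, n_customers):
    """Hypothetical fitted linear model; coefficients are invented for the sketch."""
    return 2.5 * monthly_total + 0.8 * n_customers + 10.0

def check_monotone(model, arg_index, grid, fixed):
    """Sanity check: does the prediction never decrease as one input grows,
    with the remaining inputs held fixed at the values in `fixed`?
    """
    outputs = []
    for value in grid:
        args = list(fixed)
        args[arg_index] = value  # sweep only the argument under test
        outputs.append(model(*args))
    # Non-decreasing along the grid -> matches the domain expectation.
    return all(a <= b for a, b in zip(outputs, outputs[1:]))

# Domain expectation: revenue should not decrease as monthly totals grow.
assert check_monotone(predict, 0, [1, 10, 100, 1000, 1e6], [0, 50])
```

The same helper, run over grids spanning several orders of magnitude, is also a cheap way to surface the scale problems mentioned above: a coefficient with the wrong sign shows up immediately when one variable dominates.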
A common error, especially in highly technical teams, is to over-engineer prematurely. There is a justified fear behind that: if the data science experiments become too messy, it will be too complicated for the engineers to implement them when it’s time to scale. But this very rarely justifies discarding a working Python prototype to rewrite everything from scratch in C.
Another common error is to chase the “tool of the day” and build the solution on a poorly supported technology. There are thousands of poorly maintained and poorly documented open-source projects.
Don’t reinvent the wheel. Bet on tested technologies to iterate quickly and, ultimately, bring value to your final user, whether a consulting client or someone else within your organization. Once the project is under way and the final user trusts it, there will be plenty of time to explore alternatives.