In almost every area of software development, technology changes continuously. How do you stay on top of new techniques and development tools? In many cases, people read blog posts or articles about new technologies and assume the author's decisions or results are the final word. However, everyone makes mistakes, so why do we assume that everything we read is true in all situations? As with anything, it all depends on your situation.
So, how do you know if something will work in your environment? Let's assume that you are working in a Java environment. Which MVC framework do you use? There are a ton of frameworks to choose from, so how do you pick one? The benefit of reading a lot of development blogs is that you can find out which frameworks are not generally well accepted. The problem is that this still leaves several popular frameworks, like Struts, Spring, and others. To find the one that works for your environment, you need to experiment. Try each framework on a small prototype to determine what works for your team.
The key to experimentation is to gather data. If you experiment but do not gather data, then you are really just working from whatever impressions you can remember. One great example of experimentation and data collection can be found in Tim Ferriss' book The 4-Hour Body. Tim goes outside the boundaries of typical medical and scientific research to experiment on himself. Even within the book, he recommends that you experiment with his own findings to determine what works well for you. Technology experimentation should really be no different. Think about the popular development blogs and how they describe their testing of frameworks. Would you always be using the framework in the same manner as presented in someone's blog? Probably not.
To take this point further, in a typical development shop, NoSQL solutions have not been deployed or even tried yet. This is obviously different from the current startup environment. If you start reading blog posts regarding NoSQL solutions, you will find different use cases, conflicting results, and a host of confusing information. Some blogs have even tried to compare NoSQL solutions, but the results depend heavily on how the datastore is being used, and one such blog actually states that:
NoSQL data stores are typically geared towards a specific sweet spot, and make sacrifices in other areas in order to do that one thing well.
Given this information, the first thing you need to figure out is what is important to your environment. If you are looking for a datastore that will be used heavily for searching, that choice will likely be very different from a datastore used heavily for writes. More often, you will have a combination of needs. So, you need to list your basic requirements for your datastore, even weighting each feature by importance when some features matter much more than others.
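To make that weighting concrete, here is a tiny sketch of a weighted requirements score in Java. The criteria, weights, and scores are made-up placeholders for the example; substitute whatever actually matters in your environment.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch of weighting datastore requirements.
// The criteria and numbers below are illustrative only.
public class RequirementScore {
    public static void main(String[] args) {
        // weight: how much each criterion matters to your environment (0-5)
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("read/search performance", 5);
        weights.put("write throughput", 3);
        weights.put("ease of setup", 2);

        // score: how well one candidate datastore did in your prototype (0-5),
        // filled in after the experiment
        Map<String, Integer> candidateScores = new LinkedHashMap<>();
        candidateScores.put("read/search performance", 4);
        candidateScores.put("write throughput", 2);
        candidateScores.put("ease of setup", 5);

        int total = 0;
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            total += e.getValue() * candidateScores.getOrDefault(e.getKey(), 0);
        }
        System.out.println("Weighted score for candidate: " + total);
    }
}
```

Computing a weighted total per candidate keeps the comparison grounded in your own priorities rather than in whichever feature a blog post happened to emphasize.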
Once you have the requirements, you need to gather the data. There is the amount of time it takes to configure and install the datastore. There is the development time required to interface with the datastore. A more subjective metric that you should track is the complexity of the code needed to interface with the datastore. Then there are the basic performance metrics that you need to capture. One thing you should do when tracking the performance of any datastore, including a traditional RDBMS, is to run all of your tests twice in one session. What I mean is: start the datastore, run your performance tests, and then run them again immediately after finishing. This gives you two performance measurements. The first is a "cold" performance test, which gives performance data for new queries and data manipulation. The second is a "warm" performance test, which gives performance data for the same tests but with whatever automated optimization and caching the datastore uses. You should also run these tests in several different sessions, maybe around 10, so that you can calculate a reasonable average and standard deviation.
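Here is a minimal sketch of that cold/warm timing loop in Java. The class and method names (DatastoreBenchmark, runWorkload) are invented for the example, and runWorkload is just a placeholder for whatever queries and writes your prototype actually issues against the candidate datastore.

```java
import java.util.ArrayList;
import java.util.List;

// A rough sketch of the cold/warm timing approach described above.
public class DatastoreBenchmark {

    static final int SESSIONS = 10;

    public static void main(String[] args) {
        List<Long> coldTimes = new ArrayList<>();
        List<Long> warmTimes = new ArrayList<>();

        for (int session = 0; session < SESSIONS; session++) {
            // In a real run you would restart the datastore here so that
            // every session starts from a genuinely cold state.
            coldTimes.add(timeWorkload());  // first pass: cold caches
            warmTimes.add(timeWorkload());  // second pass: warmed caches
        }

        report("cold", coldTimes);
        report("warm", warmTimes);
    }

    // Times one full pass of the workload, in milliseconds.
    static long timeWorkload() {
        long start = System.nanoTime();
        runWorkload();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Placeholder: replace with your prototype's actual queries,
    // inserts, and updates against the datastore under test.
    static void runWorkload() {
    }

    // Prints the average and standard deviation across sessions.
    static void report(String label, List<Long> samples) {
        double mean = samples.stream().mapToLong(Long::longValue).average().orElse(0);
        double variance = samples.stream()
                .mapToDouble(t -> (t - mean) * (t - mean))
                .average().orElse(0);
        System.out.printf("%s: mean=%.1f ms, stddev=%.1f ms%n",
                label, mean, Math.sqrt(variance));
    }
}
```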
I am not recommending you do a full statistical analysis of the performance data, as you are really just trying to ensure that you are picking the appropriate solution for your environment. Pure statistical data is not the only metric to track either. When dealing with new development tools, make sure that several members of the team are involved in developing the prototypes. You also need to ensure that those members have varying levels of development experience, because you are really trying to determine how well all skill levels can use the new tools. In some cases, if the prototype is "small enough", you may even want each developer to create their own prototype based on a simple set of requirements. This gives you each developer's isolated experience as opposed to the team working together. Each developer will have their own opinions, with different likes and dislikes. You can also get a good feeling from each member of the team regarding how maintainable the code will be.
As you can see, there are many important things to do when choosing new technologies. You need to experiment with the technologies as a team. You need to collect data so that your decision-making is not entirely subjective. Even if the team likes working with a specific technology, the data will tell you whether it makes sense for your purposes. So, the next time you want to use a new technology, do not just read some blogs; experiment with it to make sure that you are making the choice, and not some random blogger (like me).