As machine learning (ML) and artificial intelligence (AI) software becomes increasingly common in consumer-grade products, customers are bound to ask: what is a good machine learning hardware benchmark to look for when shopping for an upgrade or a new device? Unfortunately, this question does not yet have a clear answer. Unlike the gaming industry, which has seemingly dozens of benchmark software options that allow both gamers and developers to find the best CPUs and GPUs for their particular use case, the ML and AI communities have no such resources available to them. While strong gaming-specific benchmark results may indicate a good hardware choice for machine learning or artificial intelligence, this is not always the case. Because of the complex mathematical operations involved in tasks like deep learning, it is not uncommon for the same program to run at very different speeds on different types of hardware. Given this, a large-scale, industry-specific ML/AI benchmark is needed to ensure that enthusiasts and developers alike have access to transparent information and competitive hardware choices.
Consider the gaming industry as an example of just how beneficial widespread hardware benchmarking can be for consumers. Applications like Geekbench, Basemark, and 3DMark give gaming enthusiasts a strong understanding of their hardware’s capabilities in several key areas of performance, such as graphical frame rates or CPU speed. In turn, this information can assist consumers with purchasing decisions, as data collected from these tests is often compiled into user-friendly comparison websites that let users see how different hardware combinations may affect their system’s performance. While an excellent benchmark ecosystem exists to assist PC gamers with their purchasing decisions, there is a distinct lack of such options for ML/AI enthusiasts. This industry-wide deficiency may not seem like a huge issue at this point in time, but that will likely change in the near future. With user-friendly software packages such as Microsoft’s Windows ML and Apple’s Core ML set to make machine learning more accessible to new enthusiasts and developers alike, it is imperative that those interested in pursuing machine learning or artificial intelligence have access to quality benchmark data to inform their purchasing decisions. If the ML/AI community is to grow as quickly as we hope, those interested in working with this emerging technology must be able to make well-informed decisions when investing in expensive hardware.
Despite the faults of the nearly nonexistent ML/AI hardware benchmarking space, it is important to note that a few open-source hardware benchmark options do exist. A small number of enthusiasts and developers, unwilling to wait for benchmarking software to be developed for them, have published their own solutions on GitHub, such as DeepBench, which measures how efficiently a piece of hardware runs deep learning operations. While our research indicates that there are multiple instances of benchmarking software that measure a particular ML program’s efficiency, these benchmarks fall into a different category: in these cases, the program itself, not the hardware being utilized, is what is being tested. As a result, it is hard to say whether such benchmarks supply much information about the effectiveness of different hardware options. Overall, it is abundantly clear that there is currently a lack of user-friendly and accessible hardware benchmark software for consumers.
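As a rough illustration of the kind of measurement a hardware benchmark like DeepBench performs, the sketch below times a single-precision matrix multiply, one of the core operations underlying deep learning workloads. This is a minimal sketch of the general idea, not DeepBench’s actual methodology; the function name `time_matmul`, the matrix size, and the repeat count are our own illustrative choices.

```python
import time
import numpy as np

def time_matmul(n: int = 1024, repeats: int = 10) -> float:
    """Return the median wall-clock time (seconds) of an n x n
    single-precision matrix multiply on this machine."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up run so one-time setup cost is excluded
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

if __name__ == "__main__":
    n = 1024
    median = time_matmul(n)
    # A dense n x n matmul performs roughly 2 * n^3 floating-point ops.
    gflops = 2 * n**3 / median / 1e9
    print(f"median: {median:.4f} s, ~{gflops:.1f} GFLOP/s")
```

Running the same script on different CPUs or GPU-backed libraries yields directly comparable throughput numbers, which is exactly the transparency a standardized hardware benchmark would provide at much larger scale.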
While there is clearly a glaring lack of options for ML/AI developers and enthusiasts to benchmark their hardware, this does not have to remain the case. The current lack of interest from industry leaders in this space is certainly discouraging, but it does not mean that benchmarking software will never be developed. On the contrary, there are promising signals from the open-source ML/AI community on this problem. Contributions from companies like Intel, AMD, and Nvidia would certainly help; however, with the number of knowledgeable and highly skilled machine learning and artificial intelligence enthusiasts growing every day, it is likely that a better ML/AI hardware benchmarking solution will emerge in the near future.