Hacker News

How much has the power efficiency improved between 130nm and 7nm? Is it plausible to get better performance/watt for a custom chip on 130nm vs a software application running on a 7nm chip? I get that hardware has other benefits, but I'm just wondering, for accelerators, where the cost/benefit starts to make sense.


> Is it plausible to get better performance/watt for a custom chip on 130nm vs a software application running on a 7nm chip?

This very, very much depends on what the algorithm is (integer or FP? how data dependent?), but I would say no for almost all interesting cases.

The only exception would be if you're doing a "mixed signal" chip where some of the processing is inherently analogue and you can save power compared to having to do it with a group of separate chips.

Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.
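To illustrate why those "off" nanoamps matter, here is a rough standby-life calculation. All numbers (battery capacity, sleep currents) are made-up illustrative values, not figures from any particular process:

```python
# Back-of-envelope: how long a coin cell lasts at a given sleep current.
# All values are illustrative assumptions, not real process data.

def sleep_years(battery_mah: float, sleep_na: float) -> float:
    """Years of standby from a battery, ignoring self-discharge."""
    hours = battery_mah * 1e6 / sleep_na   # mAh -> nAh, divided by nA draw
    return hours / (24 * 365)

# A 220 mAh CR2032 at 50 nA vs 500 nA of "off" leakage:
print(round(sleep_years(220, 50), 1))    # low-leakage design
print(round(sleep_years(220, 500), 1))   # 10x leakier design
```

In practice battery self-discharge dominates at the low end, but the 10x gap in sleep current translates directly into a 10x gap in standby life.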


An open source WiFi chip would be super cool. I wonder how easy it would be to take the FPGA code from openwifi[0] and combine it with a radio on the same chip?

[0] https://github.com/open-sdr/openwifi


The problem is that analogue IC design is a field that even digital IC design people regard as black magic. It's clearly possible for that to happen but the set of people who have the skills to do it is very narrow and most of them are probably prevented from doing it in their spare time by their employment agreements.

I wonder how many "test chips" Google will let a non-expert team do to get it right? And whether they provide any "bringup" support?


A big part of the "black magic" really comes down to insufficient tooling. And at least in hardware, insufficient tooling comes down to the fact that everything is closed source and trade secret, and teams pretty much refuse to share knowledge with each other.

An open source community would go a long way to fixing an issue like this, and these "black magic" projects are actually a fantastic place for the open source world to get started, because it's an area where there's a ton of room for improvement over the status quo.


They're only allowing parts that stay within the bounds of the PDK (which only allows digital designs) for now.


Even if you could technically make it work, I'd be very nervous around the legalities of that. Or is the Wi-Fi spectrum so unregulated that you can run without any certification at all?


Certification has to do with the power and frequency of the signal. Licensing is not required in some frequency bands, like the 2.4 GHz band used by WiFi.


WiFi equipment (and pretty much every other radio) requires certification in order to be sold in every country I am aware of. WiFi doesn't require a license to operate, but that doesn't mean you can just use any hardware you like (though I think there may be exceptions for hardware you build yourself, at least in the US).


> Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.

No, you actually have more leakage at older nodes, what changes is the ratio of current spent on leakage vs. current spent doing something useful.


Doesn't leakage increase again below 22nm because of tunneling losses, though?

Of course, the lower gate capacitance allows for lower switching losses. But adiabatic computing could theoretically recover switching losses, allowing for higher efficiency at older nodes. That can be approached, for instance, by using an oscillating power supply to recover charge. If someone were to design something like this for this run, it could be very interesting.
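A minimal numeric sketch of the adiabatic argument, using the standard approximation that ramping the supply over a time T >> RC dissipates roughly (RC/T)·CV² per cycle instead of the conventional CV². The R, C, V values are illustrative, not from any real process:

```python
# Sketch of the adiabatic-charging argument: slow the ramp, lose less energy.
# R, C, V are illustrative assumptions, not real process parameters.

R = 1e3      # ohms, effective switching resistance
C = 1e-15    # farads, node capacitance (1 fF)
V = 1.2      # volts, supply

E_conv = C * V ** 2                 # joules lost per full charge/discharge cycle

def E_adiabatic(T):
    """Approximate dissipation when ramping the supply over time T >> RC."""
    return (R * C / T) * C * V ** 2

# Ramping 100x slower than the RC time cuts dissipation ~100x:
print(E_conv)
print(E_adiabatic(100 * R * C))
```

The catch, of course, is that the 100x slower ramp costs you clock frequency, which is why adiabatic logic has remained a niche technique.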

Now I'm wondering if this isn't some covert recruitment operation by Google: they will likely comb through applications, select the most promising ones, and the designers will get job offers :)


> Doesn't leakage increase again below 22nm because of tunneling losses, though?

You have tunnelling losses on bigger nodes as well, they are just not that dominant. Dielectrics got better as nodes shrank, and this is the reason FinFETs became practical (which switch faster, and more reliably on smaller nodes, but leak worse.)


You won't be able to profitably mine Bitcoin on 130nm ASICs (just as an example)

130nm is almost 20 years old at this point. You can do amazing things with this process but saving power is probably not one of them.


But as an example, you WOULD be able to profitably mine bitcoin on 130nm ASICs if all the rest of the world had was CPUs/GPUs/FPGAs, which was more what the grandparent post was asking: 130nm hardware implementations can be much, much faster and/or energy efficient than a 7nm general-purpose chip which simulates the algorithm.


I wasn't able to find great specifications for the 130nm process, but it looks like the difference in transistor size and efficiency is somewhere around 100x. For specialized applications, going from a CPU to an ASIC is usually around a 1000x performance gain.

So yes, for specific tasks like crypto operations or custom networking, you should be able to make a 130nm ASIC that is going to outperform a 7nm Ryzen. You are not going to be able to make a CPU core that's going to outperform a Ryzen however.
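Combining the two rough multipliers above, the arithmetic works out in the ASIC's favor. Both figures are the illustrative estimates from this comment, not measurements:

```python
# Back-of-envelope combining the comment's own rough figures:
# ~100x node efficiency gap (130nm vs 7nm) vs ~1000x ASIC-over-CPU gain.
# Both multipliers are illustrative estimates, not measured values.

node_penalty = 100.0    # rough 7nm-vs-130nm perf/watt gap
asic_gain = 1000.0      # rough specialization gain of an ASIC over a CPU

net_advantage = asic_gain / node_penalty
print(net_advantage)    # the 130nm ASIC still wins ~10x for the fixed task
```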


130nm was good enough for 2GHz, 30W CPUs back in the day. We are talking about enough performance to almost decode 1080p@30 h264 in software.


I suspect, however, that the gap between designs that are realizable for amateurs with limited training, and the ones that are realizable for professional teams is wider than in software.

So somebody like me, who did two standard cell based ASICs 25 years ago, probably would have to add a sizable safety margin to produce a reliable chip, and would achieve nowhere near the performance of a pro team at the time.


I would definitely be rather interested in learning how to design some chips with feature sizes large enough for power handling... I'd love to hear about this as well. This sounds like a clever way to commoditize hardware design, like when printing PCBs became affordable.


It depends on the application, but if you have a relatively narrow and complex one, I would say definitely yes.




