
Do you think it's just part of their training set now?


It's time to do "frog on a skateboard" now.



Seems very likely, even if Google has behaved ethically.

Simon and YC/HN have published and boosted these gradual improvements and evaluations for quite some time now.

There is a https://simonwillison.net/robots.txt but it allows pretty much everything, AI-wise.
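
For context, an AI-permissive robots.txt mostly just declines to disallow the AI crawlers' user agents. A rough sketch of what that looks like (not the actual contents of that file; GPTBot and Google-Extended are real crawler tokens, but the rules here are illustrative):

    User-agent: GPTBot
    Allow: /

    User-agent: Google-Extended
    Allow: /

    User-agent: *
    Allow: /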


If it's part of their training set, why do the 2B and 4B models produce such terrible SVGs?


We were promised full SVG zoos, Simon. I want to see SVG pangolins, please.


Larger models better understand and reproduce what's in their training set.

For example, I used to get verbatim quotes and answers from copyrighted works when I used GPT-3.5; that's what clued me in to the copyright problem. The smallest models, by contrast, often produced nonsense about the same topics, because small models often produce nonsense.

You might need to devise a new test each time to avoid your old ones being scraped into the training sets: a fresh prompt for each model released after your last one, and totally unrelated to the last one, too.


Because it is in their training set, but it's unrealistic to expect a 2B or 4B model to perfectly reproduce everything it has seen before.

The training no doubt contributed to their ability to (very) loosely approximate an SVG of a pelican on a bicycle, though.

Frankly, I'm impressed.


Because generating a nice-looking SVG requires handling code, shapes, long context, and reasoning, at 2B you will most likely break the syntax of the file nine times out of ten if you train for that, or you will have to settle for simpler pelicans. It might not be worth fine-tuning a 2B model on this, but on their top-tier open model it is definitely worth it. Even without targeting it directly, just crawling GitHub would make the model train on your pelicans.
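
As a rough illustration of how fragile that is, here is a minimal well-formedness check one could run over model output; the sample strings are hypothetical outputs, not real model responses:

    import xml.etree.ElementTree as ET

    def is_valid_svg(text: str) -> bool:
        # Well-formed XML with an <svg> root is the bare minimum;
        # it says nothing about whether the drawing resembles a pelican.
        try:
            root = ET.fromstring(text)
        except ET.ParseError:
            return False  # unbalanced or truncated tags fail here
        # The tag carries a namespace prefix when xmlns is declared.
        return root.tag.endswith("svg")

    # Hypothetical outputs: one complete drawing, one truncated mid-file.
    good = '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="4"/></svg>'
    bad = '<svg><circle cx="5" cy="5" r="4">'

    print(is_valid_svg(good))  # True
    print(is_valid_svg(bad))   # False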



