Hello, I’m an archivist who does things.

E? E.

  • 0 Posts
  • 21 Comments
Joined 9 months ago
Cake day: March 19th, 2024


  • FPGAs are good fun, and some of the stuff I’m working on gets even crazier. My current project is emulating a partially analog sound chip (the 6581 and 8580 SIDs) with 32-bit integers, because FPGAs can’t do analog. The best part is, it actually (mostly) works. I still have coefficient issues with the RC circuits, and the Rf1 and Rf2 voltage-controlled-resistor coefficient tables need to be recalculated, but it’s already looking pretty good (rough sketch of the idea below).

    Good fun lol
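
    Very roughly, the trick is the same one you’d use in software: discretize each RC stage to y[n] = y[n−1] + α·(x[n] − y[n−1]) and keep α as a fixed-point integer. Here’s a minimal C sketch of that idea; the actual core is HDL, and the Q16 format and α value here are made-up illustration numbers, not the real SID coefficients or the Rf1/Rf2 tables.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy software sketch of one RC stage done entirely in 32-bit
 * integers (the real thing is HDL on the FPGA). Discretized
 * first-order low-pass: y += alpha * (x - y), where alpha = dt/(RC+dt)
 * is precomputed in Q16 fixed point. The real voltage-controlled
 * Rf1/Rf2 coefficients would come from lookup tables, not a constant. */

#define QBITS 16  /* fractional bits */

typedef struct {
    int32_t y;      /* filter state, Q16 */
    int32_t alpha;  /* smoothing coefficient, Q16 (0..65536) */
} rc_lpf;

static int32_t rc_lpf_step(rc_lpf *f, int32_t x)
{
    /* widen to 64 bits so the Q16 multiply can't overflow */
    int64_t diff = (int64_t)x - f->y;
    f->y += (int32_t)((diff * f->alpha) >> QBITS);
    return f->y;
}

int main(void)
{
    /* made-up coefficient, roughly alpha = 0.06 */
    rc_lpf f = { .y = 0, .alpha = (int32_t)(0.06 * (1 << QBITS)) };

    /* step response: feed a constant input and watch the state
     * charge up toward it, just like a capacitor would */
    for (int n = 0; n < 8; n++)
        printf("%d\n", rc_lpf_step(&f, 1000 << QBITS) >> QBITS);
    return 0;
}
```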


  • As far as I’m aware, incremental synthesis is Vivado trying to build a new FPGA bitstream by modifying a snapshot of the previous build, ostensibly to save time. Because the SID FPGA implementation is a relatively small part of the MEGA65 core, Vivado really likes to forget to apply any changes I make, especially ones related to timing optimization. It took me ages to figure out the feature had re-enabled itself; once I disabled it again, my total negative slack was cut in half because it finally registered all the pipelining and other optimization. I’ve also had Vivado outright lock up in some cases.


  • Do repos on GitHub and assorted messages on text-based communication platforms count as content? Because if that’s the case, then all the time, because I generally write stuff down in case I proceed to forget exactly what that function did, or why I calculated this bypass coefficient like this, or why for the love of fuck does Vivado keep reverting to incremental synthesis and how did I fix it last time aaaaaaaaaaaaaaaaaaaaaaaa

    As for whether my random technical nonsense has any bearing on the world: not really, outside of maaaybe the demoscene if the SID stuff works out, and the few people who like reading my ramblings for some reason.


  • People can also stop mid-sentence and think for a second about the information they’re actually conveying, whereas an LLM just vomits up words that seem to match the pattern of the rest of the sentence. If I were to ask you what 2 + 2 is, you’d stop, run the math in your head, get 4, then reply with 4. An LLM would just start vomiting out words based on what it’s been trained on, without verifying that the information is good (or even relevant), and can end up confidently telling you that 2 + 2 is in fact equal to the cube root of 5, because that’s what the data said, so it has to be right.

    I’m aware this is a drastic oversimplification (the toy sketch below is the caricature I have in mind), and I think the tech is neat (although I avoid non-self-hosted models like the plague due to privacy concerns), but it’s oversold to all hell, and it’s definitely not even close to intelligent.
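
    For illustration only, here’s a minimal C caricature of the “pattern-matching without verification” point: generation just emits whichever token most often followed the previous one in some hypothetical training data, and nothing ever checks the arithmetic. Every table entry is made up, and real LLMs work nothing like this internally.

```c
#include <stdio.h>
#include <string.h>

/* Toy caricature, not how a real LLM works: "generation" just emits
 * whichever token most often followed the previous one in the
 * training data. There is no step that checks whether the output is
 * true, only whether it's a likely-looking continuation. */

struct bigram { const char *prev, *next; };

/* hypothetical corpus statistics in which "2+2 = 5" happened to be
 * the most common continuation, so that's what gets generated */
static const struct bigram table[] = {
    { "what", "is"  },
    { "is",   "2+2" },
    { "2+2",  "="   },
    { "=",    "5"   },  /* confidently wrong: it's just what the data said */
};

static const char *most_likely_next(const char *prev)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].prev, prev) == 0)
            return table[i].next;
    return NULL;  /* no continuation seen in the data: stop generating */
}

int main(void)
{
    const char *tok = "what";
    printf("%s", tok);
    while ((tok = most_likely_next(tok)) != NULL)
        printf(" %s", tok);
    printf("\n");  /* prints: what is 2+2 = 5 */
    return 0;
}
```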