• 9 Posts
  • 1.53K Comments
Joined 2 years ago
Cake day: October 4, 2023

  • goes looking for the issue

    PostgreSQL has a limit of 65,535 parameters, so bulk inserts can fail with large datasets.

    Hmm. I would believe that there are efficiency gains from doing one large insert rather than many small ones — like, there are probably optimizations one can take advantage of in rebuilding indexes — and it’d be nice for database users to have a way to leverage that.

    On the other hand, I can also believe that DBMSes might hold locks while running a query, and permitting unbounded (or very large) size and complexity queries might create problems for concurrent users, as a lock might be held for a long time.

    EDIT: Hmm. Lock granularity probably isn’t the issue:

    https://stackoverflow.com/questions/758945/whats-the-fastest-way-to-do-a-bulk-insert-into-postgres

    One way to speed things up is to explicitly perform multiple inserts or copy’s within a transaction (say 1000). Postgres’s default behavior is to commit after each statement, so by batching the commits, you can avoid some overhead. As the guide in Daniel’s answer says, you may have to disable autocommit for this to work. Also note the comment at the bottom that suggests increasing the size of the wal_buffers to 16 MB may also help.

    It is worth mentioning that the limit for how many inserts/copies you can add to the same transaction is likely much higher than anything you’ll attempt. You could add millions and millions of rows within the same transaction and not run into problems.

    Any lock granularity issues would also apply to transactions.

    There might be concerns about how the query-processing code scales.
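
    To make the batching idea concrete, here’s a rough sketch in Python with psycopg2. The items(name, qty) table, the chunk size, and the connection string are all illustrative, not anything from the article. Each chunk becomes one multi-row INSERT, which keeps the per-statement parameter count well under the 65,535 cap, and the whole load runs inside a single transaction, so the per-statement commit overhead mentioned in the Stack Overflow answer is paid only once.

        # Rough sketch; the items(name, qty) table is hypothetical.
        import psycopg2

        MAX_PARAMS = 65535   # PostgreSQL's per-statement bind-parameter limit
        COLS = 2             # columns per row in this example
        # 1,000 rows x 2 columns = 2,000 parameters per statement, far under the cap
        CHUNK = min(1000, MAX_PARAMS // COLS)

        def bulk_insert(conn, rows):
            with conn:                        # one transaction; commits on success
                with conn.cursor() as cur:
                    for start in range(0, len(rows), CHUNK):
                        chunk = rows[start:start + CHUNK]
                        placeholders = ",".join(["(%s, %s)"] * len(chunk))
                        params = [value for row in chunk for value in row]
                        cur.execute(
                            "INSERT INTO items (name, qty) VALUES " + placeholders,
                            params,
                        )

        # Usage (connection string illustrative):
        # conn = psycopg2.connect("dbname=test")
        # bulk_insert(conn, [("widget", 3), ("gadget", 5)])

    For really large loads, the same single-transaction structure works with COPY (psycopg2’s copy_from) instead of multi-row INSERTs, which is what the linked answer recommends as the fastest path.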


  • There’s room in the market for a huge number of regular games, but with live-service games, only a handful of winners can ever really succeed, creating an eye-watering risk profile for any new entrant into the market.

    Ehhh. I mean, I agree with the general idea that there have been far too many live-service games chasing too few players, but I think that it’s probably possible to create lower-budget, niche-oriented live-service games that appeal very strongly to a particular group rather than trying to get the whole world on board.

    That’s already true of non-live-service games. I like some milsims, like Rule the Waves 3, that are just never going to become a mass-market phenomenon. That’s fine, because that’s not what the publisher is aiming to do with the game, and they’ve budgeted accordingly. They’re going after a particular group with specific interests.

    But if you want to do that, player interest in your niche has to be strong enough to make up for the fact that you aren’t going to have the playerbase, and thus the budget, that a game with more general appeal would have.


  • I’d also add that the Threadiverse brought some really new and interesting things to the table.

    • By default with all current Threadiverse software packages, instances are public, and there are many public instances. This means that while an instance might have downtime, it is very, very likely that I can continue to browse content, and if I’m willing to set up an account on a second home instance, even post. Early Reddit had a lot of downtime issues, and when it went down, it was down.

    • There’s a lot more technical advancement on the Threadiverse than was happening on late Reddit.

    • The third-party software ecosystem is very strong. It’s not just the PieFed, Lemmy, and Mbin guys writing all the software. There are a ton of clients, monitoring systems, status dashboards, you name it. Reddit had third-party software too, but I feel like people are a lot more willing to commit effort to an open system.

    • I think that having competing instance policies is important. I don’t know yet whether, in the long run, this is going to wind up with largely- or entirely-decoupled Threadiverse “networks” of federated hosts split along defederation fissures, kind of like happened with IRC. I hope that it can remain mostly-connected. But I don’t want to have some party somewhere deciding content policy for all of the Threadiverse. With Twitter, Reddit, Facebook, whatever, there’s some single central controlling authority with monopoly access over the entire system. That doesn’t exist on the Threadiverse, and I am a lot happier for that.

      There will probably be people out there saying things that I don’t agree with or like, but that’s okay; I don’t have to look at it. The same is true of the Web. I really take issue with someone whose positions I don’t agree with acting as a systemwide censor (I’d also add that while I’m not really enthusiastic about the Lemmy devs’ admin decisions on lemmy.ml, I have not seen them attempt to do this even Lemmy-wide, much less Threadiverse-wide).

      That’s a real difference from Reddit. If your instance admin says that tomorrow, all content needs to be posted in all caps, you can migrate your community, your home instance, or your community usage to another instance, and other users who feel the same way can do the same. If you disagree with Reddit site-wide policy, your only option is to leave Reddit entirely. It’s Spez’s way or the highway. I don’t think that that’s reasonable for something that aspires to be a system for the whole world.


  • Reddit ended support for their API which killed off 3rd party apps and the official one sucked.

    Same, though with the modification that I wasn’t going to run the official app regardless of whether it sucked or not.

    There were also some longer-run issues that weren’t enough to make me leave the site, but made it less-preferable than it had been at one point. They just hadn’t broken the camel’s back. I didn’t like the shift to the new Web UI, and there were some minor compatibility breakages between the new and old Web UI. I wasn’t enthusiastic about some of the policy changes that had happened over the years. I thought that the change to how blocking worked was a really bad idea; it let people severely abuse it in conversation threads to prevent others from responding to their points. I was more-interested in the stuff that the earlier userbase had been interested in, though I’ll concede that one could mitigate that by limiting what subreddits one subscribed to.

    I’d also always preferred the federated structure of Usenet to Reddit — but Usenet had crashed into crippling spam problems and hadn’t resolved them. I also think that some decisions that Reddit made were the right ones, like permitting editing of comments. There are some problems with editable comments, and someone could always have grabbed an earlier copy — but people correcting errors and cooling down flamewars, where they fired off a kneejerk insult and then went back and toned it down, wound up being a net positive for Reddit relative to Usenet, Slashdot, and so forth. On the Threadiverse, I could enjoy Usenet-like federation and still have Reddit-like editable comments.

    So when Reddit killed the third-party API stuff off, it was really a “straw that broke the camel’s back” moment. It wasn’t that killing off the third-party API stuff was my sole concern, though I certainly was unhappy about that. I’d expected some eventual changes for monetization, but hadn’t expected them to include trying to mass-shovel users onto the official app. It was that the sum total of changes, combined with the Threadiverse becoming available, meant that I’d rather be on the Threadiverse.



  • “AI’s natural limit is electricity, not chips,” Schmidt said, cutting through the industry’s semiconductor obsession with characteristic bluntness.

    I mean, maybe in the very long term that’s a fundamental limit, and you face things like Dyson spheres.

    But right now, I’m personally running one human-level AGI on roughly 100W of power, so I’m just gonna say that as things stand, the prominent limitation is software not being good enough. You’re, like, a software guy.

    Ultimately AI is an optimization problem, and if we don’t know how to solve the software problems fully yet, then, yeah, we can be inefficient and dump some of the heavy lifting on the hardware guys to get a small edge.

    But I’m pretty sure that the real breakthrough that needs to happen isn’t on the hardware side. Like, my existing PC and GPU already are more capable than my brain from a hardware standpoint. The hardware guys have already done their side and then some compared to human biology. It’s that we haven’t figured out the software to run on them to make them do what we want.

    The military or whoever needs AI applications can ask for more hardware money to get an edge relative to competitors. But if you’re the (well, ex-) head of Google, you’re where a lot of those software and computer science guys who need to make the requisite software breakthroughs probably are, or could be. You’re probably among the last people who should be saying “the hardware guys need to solve this”.

    It’s going to take some more profound changes to what we’re doing in software today than just tweaking the parameters on some LLM, too. There’s probably some hard research work that has to be done. It’s not “we need immense resources dumped into manufacturing more datacenters, power plants, and chips”. It’s translating money into having some nerdy-looking humans bang away in some office somewhere and figure out what needs to change in software to get us there. Once that happens, then okay, sure, one needs hardware to make use of that software. But in July 2025, we don’t have the software to run on that hardware, not yet.


  • Making new

    • Making something new involves simpler processes that are easier to automate.

    • Making something new may involve the same series of steps done many times, so one can take advantage of economies of scale. You obtain raw materials or parts, and they’re all handled in the same way, many times over.

    • You only need to deal with assembly, not disassembly.

    Repair

    • Devices don’t fail in the same way, so repairs tend to be unique and not amenable to taking advantage of economies of scale.

    • Unless you repair a device in the same way multiple times, the scale is necessarily going to be less than manufacturing new, since if you manufacture N devices, you can’t be repairing more than N devices; again, not friendly to economies of scale. Even with things like automobiles that are designed with the intention of being easy to repair, older cars become increasingly less-practical to repair as the pool of cars of a particular year and model shrinks over time, as some become unrepairable and head to junkyards.

    • If repair requires components, those components may need to be manufactured and then warehoused until repair is required, so the storage cost also adds to the cost of repair.

    • Repairing is a complex process that likely differs from device to device and is hard to automate.

    • Repair involves not just reassembly, but also disassembly.

    • Especially if a device was not specifically designed to be simple to repair and especially if scale is not high, repair may involve (expensive) skilled labor from someone who has to be able to craft a specific repair process for this particular device being repaired.

    • Repair involves diagnosis of the problem. Diagnosis may be an extraordinarily difficult process — e.g. trying to diagnose a failure inside of a chip without destroying it is something that we may not be able to do today, and the skillset or automated system required to do it may be very complex. Intel spent ages just trying to understand why there were failures in the last two generations of their chips, a situation where they didn’t care at all about destroying chips that they’d use for diagnosis, and they weren’t trying to repair a single chip, but to fix a process that involved huge numbers of chips. Maybe an electrical engineer could diagnose a problem on a device, but that work will repair only a single device. The time of that same electrical engineer could be used to improve a manufacturing process that could produce many more new devices.

    Other

    • When you repair a device, you get a device which may have other worn components. At some point, something else will fail. If you manufacture a new device, you get all new parts; you “reset the clock” on everything.

    • When you repair a device, you get an older device. In many cases, due to the advance of technology, a newer device is preferable. That may not always be true — consider, say, an antique made by a specific artist centuries ago, where the value is in part that that particular person made the thing. But for most functional purposes, something made using present-day technology beats stuff made in the past. And that’s a more-important factor the greater the rate of advance in the field of whatever good it is that you’re trying to repair.

    • Repair may compete with recycling.

    We live in a world that, due to global trade, a large population, and probably access to interchange languages, has far more potential for scale than ever before, so economies of scale can be pretty important — design a device once, make a very efficient process for making it, and you can sell to many, many people. Billions of people, even. You couldn’t do that a few hundred years ago, because the world was simply too disconnected. That’s a lot of potential for economy of scale. And economies of scale are, I think, generally more-friendly to manufacturing new.

    I can’t think of many factors that would cause repair to become more-efficient versus manufacturing new relative to where they are today. I think that when we get AGI, we may be able to reduce the cost of skilled labor and complex automation, both used in repair, so far that repairing could see a bit of a renaissance.

    Maybe if we become a multiplanetary species in the future, travel into space and live elsewhere, then we’ll have small populations that are mostly cut off from the rest of the population, and economies of scale will greatly decrease, and repair will become more worthwhile — you fix something on Mars because there aren’t enough people on Mars for building new to make sense, and sending a new item from Earth costs too much. Maybe we’ll crash into fundamental physical limits and the rate of technological advance in many fields will slow way down, so a newer device won’t have many more benefits over an older device. Maybe some sort of new types of goods that are fantastically-expensive and only required in small scale will become increasingly important, and for them repair will be more important than for goods that are produced at large scale.

    But outside of that, I think that most of the factors will favor manufacturing new, and if anything, probably continue to do so even more than they do today.