• 1 Post
  • 63 Comments
Joined 1 year ago
Cake day: August 7th, 2023






  • A quick google shows that Kanban is a method. Mainly around picking up things as they come, but also limiting how much can be in progress at once.

    The project I’m on has a team that uses Kanban for the “Maintenance” tasks/development: take what is at the top of the board and do it, and adapt if higher-priority things come around, such as prod bugs. Our development teams are trying to implement Scrum, where interruptions are to be avoided during sprints if possible. You plan a sprint, try to do that work, present it, and iterate when users inevitably change the criteria.

    In the meme, Kanban does somewhat make sense, since getting armrests is never going to be a high priority as part of building a rocket. Scrum isn’t exactly right, but I can see where it’s coming from. They are all agile methods, though.


  • I kinda get where he is coming from, though. AI is being crammed into everything, especially into things it is not currently suited for.

    After learning about machine learning, you kind of realize that, unlike “regular programs”, ML gives you “roughly what you want” answers. Approximations, really. This is all well and good for generating images, for example, because minor details being off from what you wanted probably isn’t too bad. A chat bot itself isn’t wrong here either, because there are many ways to say the same thing. The important thing is that there is a definite step after that where you evaluate the result. In simpler ML you can even figure out the specifics of the process, but for the most part we evaluate what the LLM said, or whether the image matches our expectations. But we can’t control or constrain the output to exactly our needs, because our restrictions are largely just more input into an almost-finished approximation engine.

    The problem is that companies take these approximation engines, put them in their products, and treat their output as fact. Like AI chatbots doing customer support that make up facts, like the user who was told about airline rules that didn’t exist, or the search engines that parrot jokes or harmful advice. Sure, you and I might realize that these things come from a machine that doesn’t actually think about its answers, but others don’t. And throwing a “*this might be wrong because it’s AI” on it is not an acceptable waiver of accountability.

    Despite this, I use ChatGPT and Gemini a lot to help me program; they get a lot of things wrong, but they also do great. They are great tools exactly because I step in after the approximation step, review, and decide (roughly the pattern sketched at the end of this comment). I’m aware of the limits. But putting these things in front of “users” without a review step means you are advertising that you are either unaware of this flaw, or have done the cost-benefit analysis and decided that, if nothing else, it’ll generate interest during the hype.

    There is huge potential, but throwing AI into situations where facts are needed, when it is only making rough guesses, is the wrong way to go about it.
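
    To make that review step concrete, here is a rough C# sketch of the pattern (ILlmClient, SupportAnswer, and the policy check are all made up for illustration, not a real SDK): nothing the model says reaches the user until it has been checked against facts we actually hold.

    ```csharp
    using System.Collections.Generic;
    using System.Text.Json;

    // Stand-in for whatever chat API is in use; not a real SDK.
    public interface ILlmClient { string Complete(string prompt); }

    public record SupportAnswer(string PolicyId, string Text);

    public static class ReviewedSupport
    {
        // Returns null when the model's output fails review, so the caller
        // can hand off to a human instead of shipping a made-up rule.
        public static string? Answer(ILlmClient llm, string question, ISet<string> knownPolicyIds)
        {
            string raw = llm.Complete(question); // the approximation step

            SupportAnswer? parsed;
            try { parsed = JsonSerializer.Deserialize<SupportAnswer>(raw); }
            catch (JsonException) { return null; } // not even well-formed JSON: reject

            // The review step: only pass through answers grounded in
            // policies that actually exist.
            if (parsed is null || !knownPolicyIds.Contains(parsed.PolicyId))
                return null;

            return parsed.Text;
        }
    }
    ```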








  • It’s worth adding that I greatly prefer MS Authenticator-style authentication, since I don’t have to find the right entry, read the auth code, and then type it on the other computer. Instead MS pops a notification, you type or select the right number, verify with your fingerprint, and you’re done. Much more convenient.

    It often tells you what you are logging into and where you are attempting to log in from, so it adds a few extra layers of security for those aware enough to check those details.
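
    As far as I can tell from the outside, the number matching boils down to something like this sketch (the names and details are my guesses from the user-visible flow, not Microsoft’s actual implementation):

    ```csharp
    using System;
    using System.Security.Cryptography;

    public sealed class NumberMatchChallenge
    {
        // The sign-in page shows this number; the push asks you to enter it.
        public int Expected { get; } = RandomNumberGenerator.GetInt32(10, 100); // 2-digit code
        public DateTimeOffset ExpiresAt { get; } = DateTimeOffset.UtcNow.AddMinutes(1);

        // Called when the authenticator app answers the push.
        public bool Verify(int enteredNumber, bool biometricOk) =>
            biometricOk                          // fingerprint/face check passed on the phone
            && DateTimeOffset.UtcNow < ExpiresAt // challenge is not stale
            && enteredNumber == Expected;        // user is looking at the real sign-in screen
    }
    ```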







  • Our main motivator was, and is, that manual testing is very time-consuming and uninteresting for devs. Spending upwards of a week before a release, because the teams have to set up, pick, and perform all feature tests again on the release candidate, costs both time and money. And we saw things slip through now and then.

    Our application is time critical, legacy code of about 30 years, spread between C# and database code, running in different variations with different requirements. So a single page may display differently depending on where it’s running. Changing one thing can often affect others, so it is sometimes very tiresome for us to verify even the smallest changes, since they may affect different variants. Since there are no automated tests, especially for the GUI (which we also do not unit test much, because that is complicated and prone to breaking), we not only have to test changes, but often check for regressions by comparing by hand to the old version.

    We have a complicated system with a few integrations; setting up all the test scenarios not only takes time during testing, but also time for the dev to prepare the instructions. And I mentioned calculations: going through all the motions to verify that a calculated result is the same between two versions is an awfully boring experience, when that is exactly something automated tests can just completely take over for you (see the sketch at the end of this comment).

    As our application is projected to grow, so does all of the manual testing required for a single change. So all that effort put into manual testing and preparation can instead often be put into making tests that check requirements. And once our coverage is good enough, we can manually test only the interfaces, and leave a lot of the complicated edge cases and calculation tests to automated tests. It’s a bit idealistic to say automated tests can do everything, but they can certainly remove the most boring parts.
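
    For the calculation case, a pinned-result test along these lines is what replaces the by-hand comparison between versions (xUnit; PriceCalculator and its numbers are made-up stand-ins for our real calculation code):

    ```csharp
    using System.Collections.Generic;
    using Xunit;

    // Made-up stand-in for the real, 30-year-old calculation code.
    public class PriceCalculator
    {
        private static readonly Dictionary<string, double> VatByVariant = new()
        {
            ["VariantA"] = 0.15,
            ["VariantB"] = 0.25,
        };

        private readonly string _variant;
        public PriceCalculator(string variant) => _variant = variant;

        public double Total(double net) => net * (1 + VatByVariant[_variant]);
    }

    public class PriceCalculatorRegressionTests
    {
        // One pinned case per variant: if a change to shared code shifts a
        // result in any variant, the test fails loudly instead of relying
        // on a by-hand comparison against the previous release.
        [Theory]
        [InlineData("VariantA", 120.0, 138.0)]
        [InlineData("VariantB", 120.0, 150.0)]
        public void Total_matches_pinned_result(string variant, double net, double expected)
        {
            Assert.Equal(expected, new PriceCalculator(variant).Total(net), precision: 2);
        }
    }
    ```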