I’ve always heard your “post-architecture” referred to as “evolutionary design”.
In a way, this question itself is very “un-agile”. Agile should always be forward-looking: “What can we do next?”, “What can we get done in this short period of time?”, “What should we do next?”.
OK, so you found a possible “defect” in your system. Is it a defect, or did someone slide in a requirements change 3 months ago?
This reminds me of playing chess. Sometimes a player will make their move when their opponent is distracted. The opponent hears, “Your turn”, and they look at the board. “Which piece did you move?”. The first player just shrugs.
The point is that you shouldn’t need to know which piece just moved. Every chess position is a “state” of its own, and your best move depends on playing forward from that state, not on knowing how the board changed recently.
It’s the same thing here. You have a situation. Does it really matter how, when, who or why it happened? It shouldn’t, and here’s why:
Just because it’s a defect (if it is) doesn’t automatically mean that correcting it moves to the top of your “to do” list.
It’s going to take some non-zero amount of time to change it back to blue. And when you’re doing that, you’re not doing something else. There is always an opportunity cost to doing bug fixes and you shouldn’t treat them in an ad-hoc way. Should you be spending that time, and who gets to decide if you do? It’s not your decision. It’s the PO’s decision to make.
Maybe the PO doesn’t care about the colour. Maybe they do care, but not if it means some other feature gets delayed. Maybe it’s the most important aspect of the whole system, and there’s no way you can launch with it green. So you cancel the current Sprint and start a new one dedicated to fixing this defect! Maybe they regret asking for it to be blue, and they’re happy that it’s now green.
If it was me, I’d get a quick T-shirt size estimate on the work required to change it back to blue, then put it in the Product Backlog to be reviewed with the PO. Maybe have a quick chat with the PO, or send a memo about it. Maybe the PO will need to check with their SME to see if anyone remembers asking for it to be changed to green. Maybe not. In any event, it either makes its way into a Sprint Backlog or it doesn’t.
Also, if you’re doing Agile right, then your clients are getting constant, hands-on experience with your system as it is being developed. To go 100 days without some kind of “release” that they can play with and give you feedback on is an anti-pattern. If you are giving them the latest version every week or two and after almost three months they haven’t noticed that the footer is green, then it’s probably not important.
On to the actual question. Jira is a potential sand trap of administrative complexity. The answer is usually to keep everything smaller. Smaller features, smaller Sprints, smaller teams. A team of 5 or 6, working in one week Sprints, can only do so much per Sprint. If your fundamental unit of work - a story, or a feature, or a ticket - is sized to take something like 1/2 day and forms the basis of your Sprint Backlog, then each programmer on the team can do at most 10 SB items (in a perfect world). Depending on the composition of your teams, you’re probably going to have only about 3-4 programmers - which means 30-40 tickets per Sprint Backlog. And that’s a blue-sky number that’s practically impossible in a world with meetings. In the real world, a team of 5 or 6 is going to complete closer to 20 Sprint Backlog items in a 1 week Sprint.
The point being that 14 Sprints is your 100 days and each Sprint has about 20 easy-to-understand items in it. Whatever your management tools, it’s a failure if you can’t locate those 280 features in a matter of seconds. And it should be easy to eliminate 270 of them as not possible places where the change happened just from the description.
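Just to make the arithmetic concrete, here’s a back-of-the-envelope sketch. The team size, item size and “focus factor” for meeting overhead are illustrative assumptions you’d tune for your own team, not magic numbers:

```kotlin
// Rough Sprint capacity math from the paragraphs above.
fun sprintCapacity(programmers: Int, workDays: Int, daysPerItem: Double, focusFactor: Double): Int =
    (programmers * workDays / daysPerItem * focusFactor).toInt()

fun main() {
    // 4 programmers, 5-day Sprint, half-day items, half the time lost to meetings.
    val perSprint = sprintCapacity(programmers = 4, workDays = 5, daysPerItem = 0.5, focusFactor = 0.5)
    println("Items per 1 week Sprint: $perSprint")                    // ~20
    println("Items across 14 Sprints (~100 days): ${14 * perSprint}") // ~280
}
```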
And when those SB items are small, as they should be, it’s not an onerous task to document inside them the requirements that they are supposed to meet.
When you have 1 month Sprints with tickets that take 2 weeks to complete, everything becomes a nightmare, because it’s virtually impossible to impose any kind of consistent organizational structure on free-form stuff that big. With tiny tickets, it’s almost trivial.
And the other thing that happens with big tickets is that there’s tons of stuff that programmers do without thinking about it too much. If you’ve got two weeks to finish something, there’s a ton of latitude to over-reach - and the time estimate was just a wild guess anyway. If you have 3 hours to do something, then you’re going to make sure that what you need to do is clearly laid out - and then you have to stick to it or you won’t get done in time.
Did somebody “fix” the inconsistent footer colour while doing some huge, 2 week, ticket? Good luck finding out. But that’s not going to happen with tiny, well documented tickets.
Many, many years ago I used to have two Wyse50 terminals, each running a split screen with two parts. I did a lot of support on remote systems (via modem!), so I’d have one session on a customer system, one with source code, one running on our test system, and one for internal stuff. I didn’t have space for a third terminal.
At another job I had an office with a “U” shaped desk. I would spread printouts across half the “U” and swivel around between the computer and the printouts.
My first experience with this food was in Halifax decades ago. The Halifax Donair is a unique thing.
And it’s definitely Donair, not Doner.
Technically, he would have three drives and only two drives’ worth of data. So he could move 1/3 of the data off each of the two full drives onto the third, and then start off with RAID 5 across the remaining 1/3 of each drive.
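To make the arithmetic concrete, here’s a sketch of the steps with three drives of equal capacity (numbers only, no real disk operations; the units are arbitrary):

```kotlin
// Incremental RAID 5 migration arithmetic: three drives of capacity c,
// two of them full of data.
fun main() {
    val c = 3.0                 // capacity of each drive, arbitrary units
    val slice = c / 3           // space freed per drive in step 1
    // Step 1: move c/3 off each full drive onto the empty third drive.
    // All three drives now hold 2c/3 of data and have c/3 free.
    // Step 2: build RAID 5 across the three free c/3 slices. One slice's
    // worth goes to parity, leaving (3 - 1) * slice usable.
    val usable = (3 - 1) * slice
    println("First RAID 5 slice: $usable units usable, against ${2 * c} units of data")
    // Step 3: move data into the new array, freeing the next c/3 on each
    // drive, and repeat until everything lives on RAID 5.
}
```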
Deal with the ethernet port issue by purchasing a 5 port ethernet switch. Maybe the rest of your issues go away?
With respect to sitting above the API layer and turning DTOs to/from Domain Objects, I’d call them “Brokers”.
For me, it was a Bezzera Magica and a Baratza Vario grinder some time back. Better coffee than most cafes.
There are two kinds of issues: instance and pattern. The first time or two, it’s an instance problem. You deal with those with specificity. Something like, “I would prefer not to talk about this subject with you, please stop”.
If it persists, then it’s a pattern problem. You deal with the pattern, not the instance. “I’ve asked you not to talk about subjects like this in the past, but you haven’t stopped. This makes me feel like you don’t respect my boundaries, and it’s making it difficult for me to work with you. Why are you doing this to me?”.
You can escalate from there, and it might eventually mean involving management, but at least you’ll have the clarity of having made the situation plain before it gets to that point.
Honestly though, unless the coworker is actually deranged, they’ll be mortified when they find out they are making you uncomfortable and they’ll stop right away.
I think that a good starting place to explain the concept to people would be to describe a Travesty Generator. I remember playing with one of those back in the 1980s. If you fed it a snippet of Shakespeare, what it churned out sounded remarkably like Shakespeare, even if it created brand “new” words.
The results were goofy, but fun because it still almost made sense.
The most disappointing source text I ever put in was TS Eliot. The output was just about as much rubbish as the original text.
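For anyone who hasn’t seen one: a travesty generator is basically an order-k character model. Here’s a minimal sketch of the idea - the window size, names and defaults are my own choices, not any particular 1980s program:

```kotlin
import kotlin.random.Random

// Continue a text by repeatedly sampling which character followed the
// current k-character window somewhere in the source.
fun travesty(source: String, k: Int = 4, length: Int = 300, rng: Random = Random): String {
    // Map each k-character window to every character that followed it.
    val followers = HashMap<String, MutableList<Char>>()
    for (i in 0..source.length - k - 1) {
        followers.getOrPut(source.substring(i, i + k)) { mutableListOf() }.add(source[i + k])
    }
    val out = StringBuilder(source.take(k))
    repeat(length) {
        val window = out.takeLast(k).toString()
        val next = followers[window]?.random(rng) ?: return out.toString()
        out.append(next) // "new" words appear wherever windows chain together oddly
    }
    return out.toString()
}
```

Feed it Shakespeare and the overlapping windows chain into convincing pseudo-Shakespeare; feed it something already fragmented, like the Eliot, and the difference is hard to spot.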
I think it boils down to where you define the extension functions and how that impacts coupling.
At some level you want to divorce the repository storage of the data from your domain object. Let’s say that the repository changes, and “name” is no longer just “name”, but now “firstName” and “lastName”. The body of your application doesn’t care, or need to know that the repository has changed, as it will still just deal with a name, whatever that is.
So something has to put “firstName” and “lastName” together into a “name”, and it needs to be consistent with how the application has always received it. Is it “Fred Smith”, “Fred, Smith” or “F. Smith”? And who “owns” that logic?
From a coupling perspective, you don’t want the application logic to know anything about the repository or the internal structure of the DTO. On the other hand, you don’t want the repository service layer to know about how the data is going to be used.
Let’s say that you have two different applications that use the “name” field, but in different ways somehow. So the conversion from the two “name” fields into one might be different for each application. Yes, you could argue that recombining them exactly the way the repository service originally delivered “name” would be transparent to the client applications - but what if the change to the repository was driven by one of those applications needing the split data?
That’s usually why you put your adapters in some neutral place, associated with the client application but yet somewhat outside of it.
You could use extension functions to provide the adapter, but you need to make sure that they’re not co-mingled with your application code. Otherwise you’ve just re-established the coupling between the repository and the application that you were trying to avoid.
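Here’s a minimal Kotlin sketch of what I mean, with invented names. The point is where the extension function lives: in its own adapter file owned by the client application, not mixed into the application logic and not in the repository layer.

```kotlin
// Repository layer: the DTO mirrors the new storage shape.
data class PersonDto(val firstName: String, val lastName: String)

// Application domain: still deals in a single name, whatever that is.
data class Person(val name: String)

// Adapter, in its own file (e.g. adapters/PersonAdapters.kt): the one place
// that knows how THIS application wants the split fields recombined.
fun PersonDto.toDomain() = Person(name = "$firstName $lastName")
```

A second application that wants “Smith, Fred” ships its own adapter file with a different toDomain(), and neither the repository service nor the other application has to know or care.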
I looked, and Python has the library support for the GPIO and for background threading to poll pins. My preference would be to go with a JVM language like Kotlin, but then I’m a programmer. Python, from the little that I’ve mucked about with it, is really just one step up in complexity from scripting. Maybe even easier, because some things in shell scripts are super difficult to do.
Maybe then you need to move one step up from scripting into something closer to actual programming. I’d be surprised if Python doesn’t have the library support on a Pi for dealing with both serial and GPIO I/O.
“the end stop is external to the serial communication”
Does this mean that you have some kind of other signals or pin-outs? If so, this is starting to sound like a great project for a Raspberry Pi, because the GPIO pin array can handle that.
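If it comes to that, the polling side isn’t much code. Here’s a sketch of the background-polling idea - readEndStopPin() is a hypothetical wrapper around whatever GPIO binding you end up using, and the 5 ms interval is a guess:

```kotlin
import kotlin.concurrent.thread

// Watch a digital input in a background thread and fire a callback on the
// rising edge. Library-agnostic: plug in your real pin read.
fun watchEndStop(readEndStopPin: () -> Boolean, onTriggered: () -> Unit) {
    thread(isDaemon = true) {
        var last = readEndStopPin()
        while (true) {
            val now = readEndStopPin()
            if (now && !last) onTriggered()   // edge, not level, so it fires once
            last = now
            Thread.sleep(5)                    // poll every few milliseconds
        }
    }
}
```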
Keep in mind that it has been decades since I last used Kermit, but I’m pretty sure the use case it was originally designed for was…
Connect to a serial port, which had a modem attached. Talk to the modem and get it to dial a number. Presumably, the remote end answered and the port attached to its modem would issue a login prompt. Negotiate the login, issue a bunch of commands to change directories, and launch Kermit on the remote system. After that, Kermit-to-Kermit communications took over until you terminated the session. Finally, log off the remote system and hang up the modem.
All of this stuff could be done via scripts. I seem to remember that it would actually wait for a response, and then parse the response in the script. I don’t remember ever doing polling loops.
If you’re on a *nix box of some type, it’s totally possible to open up a serial port for manual I/O even in something like a bash script. Even if you have to reverse telnet to a terminal server.
Kermit on top of FTP can work really well. Kermit has its own communication and transfer protocol, IIRC, but updates in the 1990s allowed it to be used with TCP/IP and FTP. So you can write a script to log into a remote system, run some commands and then initiate a file transfer. The scripting allows you to wait for responses and act on them.
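The heart of any of those scripts is the same wait-for-a-prompt, send-a-command loop. Here’s a language-neutral sketch of the pattern over generic character streams (the prompts and helper names are invented for illustration, not Kermit’s actual syntax):

```kotlin
import java.io.BufferedReader
import java.io.Writer

// Block until a marker string (e.g. a login prompt) arrives, or the stream ends.
fun expect(input: BufferedReader, marker: String): Boolean {
    val seen = StringBuilder()
    while (true) {
        val ch = input.read()
        if (ch < 0) return false          // connection closed before the marker
        seen.append(ch.toChar())
        if (seen.endsWith(marker)) return true
    }
}

// Drive a login dialogue: wait for each prompt, then answer it.
fun login(input: BufferedReader, output: Writer, user: String, password: String) {
    check(expect(input, "login:")) { "never saw a login prompt" }
    output.write(user + "\n"); output.flush()
    check(expect(input, "Password:")) { "never saw a password prompt" }
    output.write(password + "\n"); output.flush()
}
```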
A small Instant Pot does the trick just as well, and you can use it for other stuff too.
So write it properly from the get-go. You can get 90% of the way by naming things properly and following the Single Responsibility Principle.
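A tiny illustration of what I mean, with an invented domain. Nothing clever, just names that carry the design:

```kotlin
// Each function does exactly one thing, and the name says which thing.
data class LineItem(val description: String, val amount: Double)
data class Invoice(val lines: List<LineItem>)

fun validLines(invoice: Invoice): List<LineItem> =
    invoice.lines.filter { it.amount >= 0.0 }   // one responsibility: filtering

fun invoiceTotal(invoice: Invoice): Double =
    validLines(invoice).sumOf { it.amount }     // one responsibility: totalling
```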
That used to be really true when I was a kid in the ’70s, but not so much today. Back then, a quality guitar cost way more than the cheap stuff, and the cheap stuff was rubbish.
Nowadays, with CNC machines everywhere, there are lots of modestly priced guitars that are very playable. The junk that we used to have to settle for back in the day only exists in the realm of “toy” instruments that aren’t really intended to be played.
Seriously, $300 can get you a very playable instrument, especially in electric guitars.
It goes really well with YAGNI. Also DRY without YAGNI is a recipe for premature over-architecting.
This is also one of the main benefits of TDD. There was a really good video - which I can’t find again - demonstrating how TDD leads you to different solutions than the ones you thought you’d use when you started. You code exclusively for one single requirement at a time, adding or changing just enough code to meet each new requirement without breaking the earlier tests. The design then evolves.
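A sketch of that rhythm, with an invented shopping-cart example. Each test is one requirement, and the implementation only ever grows just enough to pass the newest test without breaking the older ones:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

// Req 1: an empty cart costs nothing        -> just return the sum (0)
// Req 2: the total is the sum of the items  -> still just the sum
// Req 3: orders over 10000 get 10% off      -> only NOW does the branch appear
fun cartTotal(pricesInCents: List<Int>): Int {
    val total = pricesInCents.sum()
    return if (total > 10_000) total * 9 / 10 else total
}

class CartTotalTest {
    @Test fun emptyCartIsFree() = assertEquals(0, cartTotal(emptyList()))
    @Test fun totalSumsPrices() = assertEquals(3_000, cartTotal(listOf(1_000, 2_000)))
    @Test fun bigOrdersGetDiscount() = assertEquals(18_000, cartTotal(listOf(20_000)))
}
```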