You get to choose how your 401k is invested, though. The only difference is a tax advantage.
The advice is just: save money, let it grow using compound interest, use tax laws to your advantage.
There’s no “trust the government” in that advice.
Are you trying to illustrate the point?
It wasn’t 200, it was 2000.
And while most did not carry guns, they brought other weapons and armor, and used improvised devices as weapons. And some did bring guns. Source: https://amp.cnn.com/cnn/2021/07/28/politics/armed-insurrection-january-6-guns-fact-check/index.html
Thank God they were poorly organized and that the Capitol Police resisted…but it’s a complete lie to say it was 200 unarmed people.
This is all on video! This isn’t a matter of opinion!
I’m talking about using the ChatGPT API to make a chat bot. Even when the user’s input is just one sentence, it can cause ChatGPT to forget its prompt.
Is it possible to be a productive programmer with slow typing speed? Yes. I have met some.
But…can fast typing speed be an advantage for most people? Yes!
Like you said, once you come up with an idea it can be a huge advantage to be able to type out that idea quickly to try it out before your mind wanders.
But also, I use typing for so many other things: writing Slack messages and emails. Writing responses to bug tickets. Writing new tickets. Documentation. Search queries.
The faster I type, the faster I can do those things. Also, the more I’m incentivized to do them. It’s no big deal to file a bug report for something I discovered along the way because I can type it up in 30 seconds. Someone else who’s slow at typing might not bother because it’d take too long.
GPT-3.5 seems to have a recency bias problem. With a long enough input it can forget its prompt or be convinced by new arguments.
GPT-4 is better, though not immune.
I’ve had some luck with a post-prompt. Put the user’s input, then follow up with a final sentence reminding the model of the prompt and desired output format.
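A rough sketch of what I mean, using the pre-1.0 openai Python package (the system prompt, the reminder text, and “AcmeCo” are all made-up placeholders):

    import openai  # pre-1.0 style API; reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a support bot for AcmeCo. Only answer questions about "
        "AcmeCo products, in one short paragraph."
    )
    # The post-prompt: a final reminder placed *after* the user's input,
    # so the model's recency bias works for you instead of against you.
    REMINDER = "Remember: only discuss AcmeCo products. One short paragraph."

    def ask(user_input: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_input},
                {"role": "system", "content": REMINDER},
            ],
        )
        return response.choices[0].message.content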
Also, did you fully cream the butter and sugar before adding any other ingredients?
If you just dump everything into the bowl and then mix, this is what happens.
Did you scrape the bowl while mixing?
KitchenAid mixers are great, but depending on what you’re mixing you need to scrape the sides of the bowl with a spatula and then mix some more.
I don’t think it’s over mixed, I think the cookies made from the batter that was stuck to the sides are under mixed.
Sure they do. Look at all of the posts from my neighbors on Facebook and Nextdoor every time a developer tries to build an apartment building instead of a single family home in our neighborhood.
Yeah, don’t do that. Users could accidentally or maliciously type something that would get executed as Python code and break your program.
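If I’m guessing the context right (user input being passed to eval() or exec()), ast.literal_eval is a safer sketch for the common case of parsing a literal value:

    import ast

    user_input = input("Enter a value: ")

    # Dangerous: eval() runs arbitrary Python, so input like
    # "__import__('os').system('rm -rf ~')" would actually execute.
    # value = eval(user_input)

    # Safer: ast.literal_eval only accepts Python literals (numbers,
    # strings, tuples, lists, dicts, booleans, None) and raises an
    # error on anything else.
    try:
        value = ast.literal_eval(user_input)
    except (ValueError, SyntaxError):
        print("Sorry, that doesn't look like a valid value.")
    else:
        print("Got:", value)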
This is my vote too.
We have Orbi. I tried using powerline networking to bridge the satellites, but it turned out to be unnecessary. Orbi uses a separate wireless backhaul network between the base and the satellites, and it worked really well.
I wouldn’t expect Gmail or most web mail hosts to work in a browser that old. Maybe if you used Gmail in basic HTML mode.
Just thinking outside the box here, what about an alarm or chime instead of a lock?
You can’t make it impossible for a child to open. But you can make sure that if they do open it, you’ll know.
I’m a fan of randomizing the test order. That helps catch ordering issues early.
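With Python’s unittest, for example, you can shuffle the method order yourself (pytest users can get the same effect from the pytest-randomly plugin); a sketch:

    import random
    import unittest

    class ExampleTest(unittest.TestCase):
        def test_a(self):
            self.assertTrue(True)

        def test_b(self):
            self.assertTrue(True)

    if __name__ == "__main__":
        # Print the seed so a failing order can be reproduced later.
        seed = random.randrange(1_000_000)
        print("test order seed:", seed)
        random.seed(seed)

        # Randomize the comparison unittest uses to order test methods,
        # so hidden inter-test dependencies surface early instead of
        # hiding behind alphabetical ordering.
        loader = unittest.TestLoader()
        loader.sortTestMethodsUsing = lambda a, b: random.choice([-1, 1])
        unittest.main(testLoader=loader)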
Also, it’s usually valuable to make E2E tests as independent as possible, so it’s impossible for one to affect another. Have each one spin up the whole system, even though it takes longer. Use more parallelism: dozens of VMs each running a fraction of the tests, rather than trying to get the sequential time down.
I think the reality is that there are lots of different levels of tests, we just don’t have names for all of them.
Even unit tests have levels. You have unit tests for a single function or method in isolation, then you have unit tests for a whole class that might set up quite a few more mocks and test the class’s contract with the rest of the system.
Then there are tests for a whole module, that might test multiple classes working together, while mocking out the rest of the system.
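A sketch of the first two levels (normalize_email and SignupService are made up for illustration):

    import unittest
    from unittest import mock

    # Hypothetical code under test.
    def normalize_email(raw: str) -> str:
        return raw.strip().lower()

    class SignupService:
        def __init__(self, db, mailer):
            self.db = db
            self.mailer = mailer

        def signup(self, raw_email: str) -> None:
            email = normalize_email(raw_email)
            self.db.save_user(email)
            self.mailer.send_welcome(email)

    # Level 1: a single function in isolation. No setup at all.
    class NormalizeEmailTest(unittest.TestCase):
        def test_strips_and_lowercases(self):
            self.assertEqual(normalize_email("  Bob@Example.COM "), "bob@example.com")

    # Level 2: a whole class, with mocks standing in for the rest of the
    # system, testing the class's contract with its collaborators.
    class SignupServiceTest(unittest.TestCase):
        def test_saves_then_welcomes(self):
            db, mailer = mock.Mock(), mock.Mock()
            SignupService(db, mailer).signup("  Bob@Example.COM ")
            db.save_user.assert_called_once_with("bob@example.com")
            mailer.send_welcome.assert_called_once_with("bob@example.com")

    if __name__ == "__main__":
        unittest.main()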
A step up from that might be unit tests that use fakes instead of mocks. You might have a fake in-memory database, for example. That lets you test a class or module at a higher level and ensure it can solve more complex problems and leaves the database in the state you expect at the end.
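Something like this (the UserStore interface is made up; the point is that a fake is a real, working implementation, just backed by a dict instead of a database):

    # A fake is a working implementation with a shortcut. Unlike a mock,
    # it lets a test run a whole workflow and then assert on the *end
    # state* rather than on individual method calls.
    class FakeUserStore:
        def __init__(self):
            self.users = {}

        def save_user(self, email: str) -> None:
            if email in self.users:
                raise ValueError("duplicate user: " + email)
            self.users[email] = {"email": email}

        def get_user(self, email: str):
            return self.users.get(email)

    def test_signup_creates_user():
        store = FakeUserStore()
        store.save_user("bob@example.com")
        assert store.get_user("bob@example.com") == {"email": "bob@example.com"}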
A step up from that might be integration tests between modules, but all things you control.
Up from that might be integration tests or end-to-end tests that include third-party components like databases, libraries, etc. or tests that bring up a real GUI on the desktop - but where you still try to eliminate variables that are out of your control like sending requests to the external network, testing top-level window focus, etc.
Then at the opposite extreme you have end-to-end tests that really do interact with components you don’t have 100% control over. That might mean calling a third-party API, so the test fails if the third-party has downtime. It might mean opening a GUI on the desktop and automating it with the mouse, which might fail if the desktop OS pops up a dialog over top of your app. Those last types of tests can still be very important and useful, but they’re never going to be 100% reliable.
I think the solution is to keep the number of tests with external dependencies small, not block the build on them, and look at statistics. Sound an alarm when a test fails multiple times in a row, but not for every failure.
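The alarm logic can be as simple as this sketch (where the pass/fail history would come from your CI’s results store):

    # Only page someone when a test fails N times in a row,
    # not on every failure. True = pass, False = fail.
    ALARM_THRESHOLD = 3

    def should_alarm(history: list[bool]) -> bool:
        recent = history[-ALARM_THRESHOLD:]
        return len(recent) == ALARM_THRESHOLD and not any(recent)

    assert should_alarm([True, False, False, False])      # 3 straight failures
    assert not should_alarm([False, True, False, False])  # flaky, don't page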
Most of the other types of tests can be written in a way that drives flakiness down to almost zero. It’s not easy, but it’s doable. It requires a heavy investment in test infrastructure.
Check out Linear. The startup I was at nearly switched to Jira and then thankfully when a bunch of us protested, we tried Linear and ended up really loving it.
Actually I’m going to disagree strongly with that statement.
Small businesses are far, far worse at abusing workers. If a small business fires you, you’ve got absolutely no recourse. They can lay you off with no severance and then hire someone new a day later, and who’s going to do anything about it? They don’t have that many employees, so there’s no pattern and no class action, and you can’t afford to hire a lawyer to spend years fighting them in court.
In comparison, when you work at a big company, they have rules and an HR department to make sure they’re doing everything legally. Your boss wants to fire you? First your boss has to give you a negative performance review detailing exactly what you’re doing wrong. Then they have to give you an opportunity to correct it. Only then can they fire you. At an absolute minimum, that gives you a chance to start looking for a new job. Often it gives you a chance to transfer within the company, if you were otherwise a well-liked and valuable employee.
If a large company wants to let you go, they’re going to give you severance pay and extended benefits.
Of course you hear about the occasional incident where Elon Musk fires someone on the spot or a Disney employee gets reprimanded for something silly. But those incidents are extremely rare, and most of the time they end up settling behind the scenes for a nice severance.
Now, I know, I know. The HR department is there to protect the company, not you. But that’s exactly why the HR department ensures employees are treated well, even when they’re fired - because they don’t want a lawsuit later.
I have a hard time reconciling that with my observations in Europe:
I’ve never felt like European drivers were “more safe”.
The only differences I can think of that are positive for Europe:
GNU gets credit for the GPL, and for being the first major project to start to create a free Unix operating system. So it’s true that when the Linux kernel was first released, the fact that you could boot a usable operating system on top of it was due to GNU.
But…the success of what most of us just call “Linux” since then is due to thousands of individuals and organizations other than GNU. The vast majority of free software running on top of a Linux operating system has nothing to do with GNU and is not licensed under the GPL.
Let’s say I’m running Linux on a server, for a small app running the MERN stack. Literally none of the MERN stack is GNU.
Let’s say I’m running Linux on a desktop. I’m depending on Wayland, KDE, Chromium, VSCodium, and a dozen other tools, none of which are GNU.
However, the fact that I can use the same OS to run a tiny embedded device or a superpowered server, that’s due to the Linux kernel and the thousands of individuals, organizations, and companies who have made it into the most efficient and versatile operating system kernel in the world, period.
So to me, I have no problems at all calling the operating system “Linux”.
What happened? Was this at work or home?
Certainly many others would have tried to invent something like the web.
HyperCard predated the web browser and had the concept of easy-to-build pages that linked to one another. Lots of people were working on ways to deliver apps over the Internet.
I think in some alternative timeline we’d still have a lot of interactive content on the Internet somewhat like the web, but probably based on different technology. Maybe more proprietary.