Aaand it’s been fixed as of 52 minutes ago. Bravo!
Anarchist, autistic, engineer, and Certified Professional Life-Regretter. If you got a brick of text, don’t be alarmed; that’s normal.
So because you reminded me that Clown Core exists, I caught up with their recent discography on my phone with headphones. However, about five minutes ago the audio on my phone “suddenly cut out”. I think my mom started up her car, and because I had connected my phone’s Bluetooth to the car at some point in the past, the phone instantly decided “yeah, let’s pump the user’s audio into this device, I’m sure that’s what he wants”, even though headphones were plugged in, without prompting me 😳.
I.e., I think my mom got a few seconds of surprise Clown Core.
She knows I like metal but I don’t know how to explain this 🤣.
Nobody please, thank you
If by “left” and “right” you mean Democrats and Republicans respectively, then I’m going to ignore all their rhetoric to the best of my ability just as I have been. I’m not interested in anything even remotely resembling either programme.
When you talk about a sample, what does that actually mean?
First, the sound in the real world has to be converted to a fluctuating voltage. Then, this voltage signal needs to be converted to a sequence of numbers.
Here’s a diagram of the relationship between a voltage signal and its samples:
The blue continuous curve is the sine wave, and the red stems are the samples.
A sample is the value [1] of the signal at a specific time. So the samples of this wave were chosen by reading the signal’s value every so often.
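If you want to see that in code, here’s a minimal numpy sketch of taking samples of a sine wave (the 440 Hz frequency and 10 ms duration are arbitrary choices of mine, nothing standard):

```python
import numpy as np

fs = 44100        # sample rate: how often we read the signal, per second
f = 440.0         # frequency of the sine wave in Hz (arbitrary)

# Sample times: one reading every 1/fs seconds, for 10 ms.
t = np.arange(0, 0.010, 1.0 / fs)

# The samples are just the signal's value at each of those times.
x = np.sin(2 * np.pi * f * t)

print(x[:5])      # the first few numbers that would get stored
```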
Like I recognize that the frequency of oscillations will tell me the pitch of something, but how does that actually translate to a chunk of data that is useful
One of the central results of Fourier Analysis is that frequency information determines the time signal, and vice versa [2]. If you have the time signal, you have its frequency response; you just gotta run it through a Fourier Transform. Similarly, if you have the frequencies that made up the signal, you have the time signal; you just gotta run it through an inverse Fourier Transform. This is not obvious.
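Don’t take my word for it; here’s a quick numpy check that a forward-then-inverse Fourier Transform hands you back exactly what you started with:

```python
import numpy as np

x = np.random.randn(1024)      # any old time signal

X = np.fft.fft(x)              # time -> frequencies
x_back = np.fft.ifft(X).real   # frequencies -> time

print(np.allclose(x, x_back))  # True: no information lost either way
```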
Frequency really comes into play in the ADC and DAC processes because we know ahead of time that a maximum useful frequency exists. It is not trivial to prove this, but one of the results of Fourier Analysis is that you can only represent a signal with a finite number of frequencies if there is a maximum frequency above which there is no signal information. Otherwise, a literally infinite number of numbers, i.e. an infinite sequence, would be required to recover the signal. [2]
So for sampling and representing signals, the importance of frequency is really the fact that a maximum frequency exists, which allows our math to stop at some point. Frequency also happens to be useful as a tool for analysis, synthesis, and processing of signals, but that’s for another day.
You mention a sample being stored as a number, which makes sense, but how is that number utilized?
The number tells the DAC how big a voltage needs to be sent to the speaker at a given time. I run through an example below.
Again assuming uncompressed, if my sample “value” comes up as 420, does that include all of the necessary components of that sound bite in a 1/44100th of a second? How would a sample at value 421 compare?
A sample value of 420 is meaningless without specifying the range that samples live in. Typically, we either choose the range -1 to 1 for floating point calculations, or -2^(n-1) to (2^(n-1) - 1) when using integer math [7]. If designed correctly, a sample that’s outside the range will be “clipped” to the minimum or maximum, whichever is closer.
However, once we specify a range for digital signals to “live in”, if the sample value is within range, then yes, it does in fact contain all the necessary components [6] for that sound bite in a 1/44100th of a second.
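As a sketch of the integer case (assuming 16-bit samples, which is what CDs use, though the same idea works for any n):

```python
n = 16                                  # bits per sample
lo, hi = -2**(n - 1), 2**(n - 1) - 1    # -32768 and 32767 for n = 16

def clip(sample):
    """Clamp an out-of-range sample to whichever end of the range is closer."""
    return max(lo, min(hi, sample))

print(clip(420))     # 420    -- in range, passes through untouched
print(clip(40000))   # 32767  -- out of range, "clipped" to the maximum
```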
As an example [3], let’s say that the 69th sample has a value of 0.420, or x[69] = 0.420. For simplicity, assume that all digital signals can only take values between Dmin = -1 and Dmax = 1 for the rest of this comment. Now, let’s assume that the DAC can output a maximum voltage of Vmax = 5V and a minimum voltage of Vmin = -7V [4]. Furthermore, let’s assume that the relationship between the digital value and the output voltage is exactly linear, and the sample rate is 44100Hz. Then, (69/44100) seconds after the audio begins, regardless of what happened in the past, the DAC will be commanded to output a voltage Vout (calculated below) for a duration of (1/44100) seconds. After that, the number specified by x[70] will command the DAC to spit out a new voltage for the next (1/44100) seconds.
To calculate Vout, we need to fill in the equation of a line.
Vout(x) = (Vmax - Vmin) / (Dmax - Dmin) × (x - Dmin) + Vmin
Vout(x) = (5V - (-7V)) / (1 - (-1)) × (x - (-1)) + (-7V)
Vout(x) = 6(x + 1) - 7 [V]
Vout(x) = 6x + 6 - 7 [V]
Vout(x) = 6x - 1 [V]
As a check,
Vout(Dmin) = Vout(-1) = 6×(-1) - 1 = -7V = Vmin ✓
Vout(Dmax) = Vout(1) = (6×1) - 1 = 5V = Vmax ✓
At this point, with respect to this DAC I have “designed”, I can always convert from a digital number to an output voltage. If x > 1 for some reason, we output Vmax. If x < -1 for some reason, we output Vmin. Otherwise, we plug the value into the line equation we just fitted. The DAC does this for us 44100 times per second.
For the sample x[69]=0.420:
Vout(x[69]) = 6×x[69] - 1 [V] = 6×0.420 - 1 = 1.520V.
A sample value of 0.421 would yield Vout = 1.526V, a difference of 6mV from the previous calculation.
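The whole “DAC” we just designed fits in a couple of lines, if you want to play with it (the function name and defaults here are just mine for the sketch, not any real API):

```python
def dac_vout(x, vmin=-7.0, vmax=5.0, dmin=-1.0, dmax=1.0):
    """Convert one digital sample to an output voltage using the fitted
    line from above, clipping out-of-range samples first."""
    x = max(dmin, min(dmax, x))   # clip to the digital range
    return (vmax - vmin) / (dmax - dmin) * (x - dmin) + vmin

print(dac_vout(0.420))   # 1.52 V
print(dac_vout(0.421))   # ~1.526 V, a 6 mV difference
print(dac_vout(-1.0))    # -7.0 V = Vmin
print(dac_vout(99.0))    #  5.0 V = Vmax (clipped)
```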
And how does changing a sample from 0.420 to 0.421 affect how it’s going to sound? Well, if that’s the only difference, not much. They would sound practically (but not theoretically) identical. However, if you compare two otherwise identical tracks except that one is rescaled by a digital 1+0.001, then the track with the 1+0.001 rescaling will be very slightly louder. How slight really depends on your speaker system.
I have used a linear relationship because it’s the ideal relationship, and it keeps the math simple.
However, as long as the relationship between the digital value and the output voltage is monotonic (only ever goes up or only ever goes down), a designer can compensate for a nonlinear relationship. What kinds of nonlinearities are present in the ADC and DAC (besides any discussed previously) differ by the actual architecture of the ADC or DAC.
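Here’s a toy illustration of that compensation idea (the tanh curve is completely made up; a real DAC’s nonlinearity would come from measurement):

```python
import numpy as np

# A made-up monotonic DAC curve: digital value in, voltage out.
def dac_curve(x):
    return 5.0 * np.tanh(x)

# Tabulate the curve, then invert it by interpolation. Monotonicity is
# exactly what makes this lookup well-defined.
xs = np.linspace(-1.0, 1.0, 10001)
vs = dac_curve(xs)

def predistort(v_desired):
    """Find the digital value that makes the nonlinear DAC output v_desired."""
    return np.interp(v_desired, vs, xs)

print(dac_curve(predistort(2.5)))   # ~2.5: nonlinearity compensated
```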
Is this like a RGB type situation where you’d have multiple values corresponding to different attributes of the sample (amplitude, frequencies, and I’m sure other things)?
Nope. R, G, and B can be adjusted independently, whereas the samples are mapped [5] one-to-one with frequencies. Said differently: you cannot adjust sample values and frequency response independently. Said another way: samples carry the same information as the frequencies. Changing one automatically changes the other.
Is a single sample actually intelligible in isolation?
Nope. Practically, your speaker system might emit a very quiet “pop”, but that pop is really because the system is being asked to quickly change from “no sound” to “some sound” a lot faster than is natural.
Hope this helps. Don’t hesitate to ask more questions 😊.
[1] Actually, the sample is ideally proportional to the value of the signal, what is termed a (non-dynamic) linear relationship, which is the best you can get with DSP because digital samples have no units! In real life, it could be some non-linear relationship with the voltage signal, especially if the device sucks.
[2] Infinite sequences are perfectly acceptable for analysis and design purposes, but to actually crunch numbers and put DSP into practice, we need to work with finite memory.
[3] Sample indices typically start at 0 and must be integers.
[4] Typically, you’ll see either a range of [0, something] volts or [-something, +something] volts; however, to expose some of the details I chose a “weird” range.
[5] If you’ve taken linear algebra: the way computers actually do the Fourier Transform, i.e. transforming a set of samples into its frequencies, is by baking the samples into a tall one-column matrix (a vector), then multiplying it by an FFT matrix to get a new vector representing the weights of the frequencies you need to add to get back the original signal. The FFT matrix is invertible, meaning that there exists a unique matrix that undoes whatever changes the FFT matrix can possibly make. All Fourier Transforms are invertible, although the continuous Fourier Transform is too “rich” to be represented as a matrix product.
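Since that footnote is hard to picture without doing it, here’s a tiny numpy demo that builds the DFT matrix by hand and checks it against the FFT (N = 8 is an arbitrary small size):

```python
import numpy as np

N = 8
x = np.random.randn(N)                    # the samples, as a vector

# DFT matrix: W[k, n] = exp(-2j*pi*k*n/N)
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
W = np.exp(-2j * np.pi * k * n / N)

print(np.allclose(W @ x, np.fft.fft(x)))           # True: same transform
print(np.allclose(np.linalg.inv(W) @ (W @ x), x))  # True: invertible
```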
[6] I have assumed for simplicity that all signals have been mono, i.e. one speaker channel. However, musical audio usually has two channels in a stereo configuration, i.e. one signal for the left and one signal for the right. For stereo signals, you need two samples at every sample time, one from each channel at the same time. In general, you need to take one sample per channel that you’re working with. Basically, this means just having two mono ADCs and DACs.
[7] Why 2^n and not 10^n? Because computers work in binary (base 2), not decimal (base 10).
Short answer: to record a sound, take samples of the sound “really really often” and store them as a sequence of numbers. Then to play the sound, create an electrical signal by converting those digital numbers to a voltage “really really often”, then smooth it, and send it to a speaker.
Slightly longer answer: you can actually take a class on this, typically called Digital Signal Processing, so I’m skipping over a lot of details. Like a lot a lot. Like hundreds of pages of dense mathematics a lot.
First, you need something to convert the sound (pressure variation) into an electrical signal. Basically, you want the electrical signal to look like how the audio sounds, but bigger and in units of voltage. In other words, you need a microphone.
So as humans, the range of pitches of sounds we can hear is limited. We typically classify sounds by frequency, or how often the sound wave “goes back and forth”. We can think of only sine waves for simplicity because any wave can be broken up into sine waves of different frequencies and offsets. (This is not a trivial assertion, and there are some caveats. Honestly, this warrants its own class.)
So each sine wave has a frequency, i.e. how many times per second the wave oscillates (“goes back and forth”).
I can guarantee that you as a human cannot hear any pitch with a frequency higher than 20000 Hz. It’s not important to memorize that number if you don’t intend to do technical audio stuff; it’s just important to know that number exists.
So if I recorded any information above that frequency, it would be a waste of storage. So let’s cap the frequency that gets recorded at something. The listener literally cannot tell the difference.
Then, since we have a maximum frequency, it turns out that, once you do the math, you only need to sample at a frequency of at least twice the maximum you expect to find. So for an audio track, 2 times 20000 Hz = 40000 times per second that we sample the sound. It is typically a bit higher for various technical reasons, hence why 44100 Hz and 48000 Hz sample frequencies are common.
So if you want to record exactly 69 seconds of audio, you need 69 seconds × 44100 [samples / second] = 3,042,900 samples. Assuming space is not at a premium and you store the file with zero compression, each sample is stored as a number in your computer’s memory. The samples need to be stored in order.
To reproduce the sound in the real world, we feed the numbers, in order, at the same frequency (the sample frequency) that we recorded them at, into a device that works as follows: for each number it receives, the device outputs a voltage that is proportional to that number, until the next number comes in. This is called a Digital-to-Analog Converter (DAC).
Now at this point you do have a sound, but it generally has wasteful high frequency content that can disrupt other devices. So it needs to get smoothed out with a filter. Send this voltage to your speakers (to convert it to pressure variations that vibrate your ears which converts the signal to an electrical signal that is sent to your brain) and you got sound.
Easy peazy, hundreds of pages of calculus squeezy!
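If you want to see the record-and-play-back story end to end, here’s a toy simulation (the sample-and-hold step and the 4th-order Butterworth smoothing filter are my choices for the sketch; real DACs differ in the details):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44100
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)        # the stored samples (a 440 Hz tone)

# DAC stage: hold each sample's voltage steady until the next one arrives.
# Simulated here on an 8x finer time grid so the steps are visible.
staircase = np.repeat(x, 8)

# Smoothing stage: a lowpass filter removes the steps' sharp
# high-frequency edges before the signal reaches the speaker.
b, a = butter(4, 20000 / (fs * 8 / 2))
smooth = filtfilt(b, a, staircase)
```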
could monkeys typing out code randomly exactly reproduce their exact timbre+tone+overall sound
Yes, but it is astronomically unlikely to happen before you or the monkeys die.
If you have any further questions about audio signal processing, I would be literally thrilled to answer them.
To understand memes
I understand the last part, but I want, in my life, to at least try a career, try my hand at it. Not sure how to explain.
No I get it. I’m in the same boat, I’m still trying to get a job and I really just want to start participating in the engineering world. It’s just so hard to be allowed in.
you might be looking up what countries you can get to with low cost of living soon.
Literally years and years ago.
I’d leave over nothing, if I could I’d get on a plane right fucking now without even saying goodbye to anyone, but I’d also be willing to flee over my $35000 and rising student loans.
I gotta warn you, as an autistic person who graduated last year with an engineering degree…shit sucks. Half the applications are fake, half the interviews are fake just to scare the overworked employees. The hiring managers are perfectly willing to waste your fucking time justifying the existence of their jobs. I’ve applied for over 350 jobs and internships and gotten zero offers. Same with my classmates. Expect multiple rounds (3-6, maybe more) before getting an offer.
And engineering was supposed to be a “safe” degree. I can’t imagine how much harder it is for humanities.
It’s honestly about who you know, then how wealthy and privileged you already are, if you currently have a job, then how personable you are, then a whole bunch of factors I haven’t been able to identify, and then at the very end, how competent you’ll be in the role.
Make sure to go to your school’s career fair. Dress up as much as possible and bring ~69 copies of your resumé (yes, around seventy, but I’m a manchild so I actually printed exactly 69 last career fair) and hand them all out to employers you can tolerate working for. Typically, you will be expected to know about their company’s work and what positions they have available. I noticed that a lot of companies are there just for brand recognition, i.e. they’re wasting your time. If they’re not wasting your time, there’s a good chance that the person standing there is either a braindead hiring manager or your direct supervisor, or anything in between. At my college, the companies actually list the positions they’re hiring for. If there are none, I don’t go to that company, because they’re wasting my time or aren’t serious enough to fill out the paperwork.
If your school publishes the employers who will be at the fair, make sure to scan through the list and target employers you want to talk to. Many employers have long lines, so plan accordingly. As an anarchist, I also do a bit of research on each company to make sure they’re not defense contractors, police collaborators, prison contractors, etc. This eliminates a third to a half of the possible employers at my school.
Career fairs are, from my experience, emotionally and physically draining events that need several days of preparation to get any benefit, and several days of recovery. They are surprisingly loud (bring inconspicuous headphones or earplugs).
Make sure you have experience in the field you’re applying to work in, even if (especially if) a job posting says you don’t need it. They’re lying. They’re always lying. They basically don’t want to train you at all. Experience in my field is internships and other free work, or a previous job. Research does not seem to count as experience. I hope your field is different.
Don’t give out your personal info over email to a job posting. Don’t do email interviews; make sure you see an actual moving human, be it over a video call or in person. Got my identity stolen that way. And don’t work for a company that will make you cash a big check (about $5000, right up to the deposit limit for online banking) for “office supplies”. It’s a scam. However, legitimate companies will also ask you for basically the same information and store it in an equally insecure plain-text database, and you’re expected to provide it.
For DEI stuff, you can fill it out, or not fill it out, or whatever floats your boat. For example, I fill out that I am Hispanic, but not that I’m autistic. I dunno; I just don’t trust engineers to be cool with an openly autistic person based on literally every engineer or engineering-adjacent person I’ve ever met in person ever.
Besides letters of recommendation, make sure you have people you can use as references who are actually willing to be contacted by phone.
Technically, you should tweak your resumé for every position. However, because I’m so done with this shit forever, I basically keep a few classes of resumé for different job types. For example, I have a “generic” electrical engineering template, a “control systems” template, and a “data science/software” template. If there’s an opportunity I really want, only then will I tweak it by mirroring the content of the job post. It’s super important for your resumé to be searchable, because the employer is probably going to just do a Ctrl+F to find relevant terms.
Make sure to also have a plain-text version of your resumé lying around. A common pattern is for the employer to have you upload a copy of your resumé and not even fucking attempt to parse it, meaning that you have to re-enter all its information by hand into their shitty form. Generally speaking, you should be expecting to spend about 15 minutes per application.
Don’t put absolutely everything on the resumé. You need to leave some stories for interviews.
Do your phone and Zoom interviews in front of a computer with a text editor open. I actually take notes during and after the interview, and then commit it to a remote repo so I can pull it onto any computer and get all my notes from all phone calls. You should also have a copy of the resumé you actually submitted to the company on hand.
Technically also you should write cover letters for every position, but again, because I’m so fucking done with this bullshit, I rarely do. If I’m feeling like doing a half-measure, this is actually an excellent opportunity to use ChatGPT or an open-source LLM to write for you, of course with proofreading, because this is an application where a bullshit machine actually works (and is fucking deserved), since they socially expect bullshit. Not like they’re reading it anyways.
I’m “pro-work,” if anything. I want a career.
Can I be honest? I desperately want to work too, but I’m slowly coming to the conclusion that it’s literally easier not to fucking bother and just live off the government, parents, rich friends, and/or stealing. I’m actually a lot worse off than I used to be before studying engineering. I’m overqualified for my old job, but underqualified for engineering and tech work, and all at the price of thousands of fucking dollars of debt. Turns out capitalist “efficiency” is making it harder for us to be put to work.
Looking for work is a job in and of itself, except you don’t get paid.
I love how into this stuff you are.
Thanks, I wish people around me felt the same way 😂.
T O A N W O O D Z
So I actually found an Acoustical Society of America article on wood species for acoustic guitar by a luthier. My favorite quote was:
Provided the wood does not respond like the proverbial “piece of wet cardboard”, most luthiers can create a respectable instrument from available timber.
And tbh with enough EQ and compression before the amp I probably can get metal out of a piece of wet cardboard.
From the conclusion of the paper:
Specific woods types have specific attributes that make them best suited for making particular guitar components.
…
However, the street lore attributing specific types of sound to specific species of a genus is seldom justified.
…
Guitars designed to acoustical criteria (rather than dimensional criteria) where the effects of different stiffnesses and densities of species are minimised, sound very similar.
…
The residual differences that can be heard may be attributable to the sound spectral absorption and radiation of the particular piece of wood used, a property that is not easily measured and is poorly substituted by the occasional measurement of the damping characteristics of the wood. Once the density and Young’s modulus of particular species is accounted for by careful acoustical design the residual differences are very subtle, yet can be important enough to ensure that some luthiers continue the romantic search for that “holy grail” of woods.
I believe that some of this discussion should apply to electric guitar. However, unless you are playing basically perfectly clean electric guitar, the wood your guitar is made of is a lot less important than… everything else in the signal chain. That said, since wood does affect the guitar’s sensitivity, I could see it affecting how the guitar responds to classic amps with low (relative to modern amps) distortion generated by few gain stages and less filtering, i.e. the playstyle employed by those guitar forum people. Still, a much larger factor in your guitar’s sound is… big surprise… all the other choices the luthier made when designing and fabricating your guitar, as well as your pickups and the signal chain you use after the signal leaves the guitar.
Also since we’re metal players and we’re absolutely destroying the original signal, the type of wood only makes a difference for structural reasons (i.e., not going out of tune, exploding under the pressure, etc.), which can similarly be accounted for by a competent luthier. For example, all of my guitars are uber-cheap, and their necks can be very easily pulled out of tune, because they were not built by competent luthiers. Consequently, the few times I did play live shows, I had to be very careful on stage to not “do stuff my guitar doesn’t like” so it didn’t go out of tune by the end of the song. Good times…
Creambacks
So I found a video where Creambacks get compared to a V30. IMO based on that video and forum posts, I would consider a Creamback H-75 over the H-65 or the Neo. H-65 sounded too dark to stand out in a mix, and the Neo sounded like bees and basically nothing like the other two. (If my guitar sounds like bees, I want it to be an effect I can turn off.) However, take it with a grain of salt since mic positions were not the same for each speaker. But also, it depends on your primary use case (recording, bedroom play, playing shows).
Although honestly, I think 99% of guitar players would get a lot farther investing in a PC with a decent CPU + a decent USB audio interface than buying actual physical amplifiers unless they need to amplify an actual venue [1]. You’d get better sound, more controllable sounds [2], easier recording, and more possibilities by going digital. Also, if you can send guitar into your computer (or run the Effects Send to your interface to test it with your real amp), it would be cheaper to pick up an impulse response of the speaker before committing to buying one. (An impulse response captures the “character” of a speaker + cabinet + power amp assuming it is a linear system. It is a very good approximation, nearly indistinguishable from the real thing. For example, I recorded several IRs of my Vintage 30 and a couple other speakers in my cabinet.)
[1] Technically you need plugins and DAW software too, but you can 100% use a combination of stock plugins and freeware and get excellent results with practice. The Ardour DAW is free and open-source (they do charge for pre-compiled binaries, though Linux package managers typically have a version ready to go for free), although REAPER is better IMO (not simple, but extremely customizable and stable) and has an infinite, unlimited free trial (and runs on Linux).
[2] For example, the “clean” channel on the 6505 absolutely sucks, except (ironically) as a rhythm metal channel. If I needed to use both clean and distorted sounds, I would have to use a second amp and an A-B switch. In software, it is absolutely trivial to automate the switch between two (or more) amps (or effects, or whole signal chains). ReaGate, a freeware noise gate plugin that comes with REAPER but anyone can get, includes an adjustable pre-filter so that it only responds to the frequency ranges you expect your guitar to “live in”. It also has a side chain input, meaning you can gate the output signal based on the signal that goes in before the amplifier, like the “four-wire” noise gate setup in an amplifier’s FX loop. This setup means that the amplifiers won’t distort the signal as the gate transitions from on to off, and it also can take care of noise due solely to the distortion stages.
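For the curious, “playing through an IR” is just a convolution, something like this sketch (the arrays here are random stand-ins; in practice you’d load a real DI track and a real IR file):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
di_track = np.random.randn(fs)    # stand-in for a dry direct-input guitar track
cab_ir = np.random.randn(2048) * np.exp(-np.arange(2048) / 300.0)  # stand-in IR

# If the speaker + cab + power amp really is (close to) linear, then
# running the signal "through" it is exactly a convolution with its IR.
wet = fftconvolve(di_track, cab_ir)[: len(di_track)]
```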
That was more signal chain theory than I’ve ever read in one sitting.
Sorry 😂. Digital signal processing is one of my special interests so I typically go overboard with it.
- Source of distortion doesn’t really matter, it’s filters
- TS808 cleans shit up, no tubes. Tubes not necessary and probably don’t do much in a pedal anyway
Yep that’s it.
- What are you running now for amps and such to get away from the 5150/TS808 combo?
In the “before times”, I used a TBX150 solid state amp alone, and a Peavey 6505, mostly for recording. The TBX150 is a great amp for modern death metal, and it has a parametric EQ. For me, that’s great, but a lot of guitarists don’t like parametric EQs; it’s the same complaint they have about the Metal Zone. For both amps, I plugged them into a cheap birch Seismic cabinet with a Vintage 30 speaker harvested from a Recto cab. Honestly, the biggest factor in the quality of my guitar recordings was switching to that speaker.
At this very moment, since my grandmother moved in, I’ve had to forgo amps altogether for simulators. I actually use either a 6505 simulator (Nick Crowe 8505) or a Fender Frontman (yes, that amp, specifically the AXP Softamp plugin) with the mids cranked up and the cabinet impulse thrown out and replaced with a set of impulses I recorded myself from the previously mentioned cabinet.
The best results I’ve gotten have been with an EQ before a Boss HM-2 (Buzz Helvetes): set the EQ to however much “HM-2-ness” you want depending on what you’re playing, and set the pedal to the smallest possible offset from zero distortion. The pre-EQ is typically a bandpass so I can get more “grinding” and as a cheat for not changing my strings. But it doesn’t really change the “overall” frequency response of the output of the HM-2, just how the HM-2 “sees” your guitar, so you still get its nastiness. Then the majority of the gain comes from the amp.
If an HM-2 or Metal Zone is too much, I’ve gotten really “smooth” results with using the ProCo Rat as an overdrive. Note that on a ProCo Rat, the Filter (tone) knob “is backwards”; all the way to the left = minimal filtering.
My inspiration for this is really the fact that old school death metal was recorded on shitty gear compared to what is available today, and that some of the magic lies in the fact that it sucks in just the right way. Besides At the Gates, who used two shitty pedals, Chuck Schuldiner from Death used a shitty Valvestate and got great results. Most of the old school death metal bands were using Valvestates.
In the past few months, I’ve been experimenting with using the RS-MET tool chain plugin to generate nasty sounding distortion with odd-order Chebyshev polynomials. It initially sounds like a more unhinged Boss HM-2 with no pre- or post-EQ, but since the plugin lets you input the math you want to do, it’s much more controllable. If you use this plugin, you gotta make sure to set the built-in filters to cut off high frequencies that would be aliased, or turn on oversampling, or both. This is included within the plugin, but you have to actually set it. Otherwise, everything just sounds like aliasing, although that’s pretty gnarly too.
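If “Chebyshev waveshaping” sounds abstract, this is the core of it in a few lines of numpy (the coefficients are arbitrary starting points, not anything tuned, and this is the general technique, not the RS-MET plugin’s exact math):

```python
import numpy as np

def cheby_shaper(x, coeffs=(0.0, 1.0, 0.0, 0.5, 0.0, 0.25)):
    """Waveshape x with a sum of odd-order Chebyshev polynomials
    (T1 + 0.5*T3 + 0.25*T5 here), so the distortion stays symmetric."""
    x = np.clip(x, -1.0, 1.0)   # the Chebyshev harmonic trick assumes |x| <= 1
    return np.polynomial.chebyshev.chebval(x, coeffs)

# A full-scale pure tone in -> exact odd harmonics out (3rd and 5th).
# At 220 Hz even the 5th harmonic (1100 Hz) is far below Nyquist, so no
# aliasing here; crank the order or the pitch and you'd need oversampling.
t = np.arange(44100) / 44100.0
dirty = cheby_shaper(np.sin(2 * np.pi * 220 * t))
```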
So the short answer is: switch out a tube screamer for some garbage piece of gear, preferably something with a frequency response (loosely “tone”) you like and a “bold” distortion. Then, set the pedal so it is giving the least amount of gain while still exhibiting its nonlinearity (minimum possible distortion), then set the amplifier to give you the rest of your gain and cut through the mix. I cannot stress enough that for metal guitars, particularly recording guitars, you gotta set your knobs so that it sounds good in the mix. If it sounds perfect in the room without the rest of the band, I guarantee you it will sound muddy in the mix.
atmospheric black metal rabbit hole
Fuck I love me some atmoblack. It’s honestly my biggest inspiration as a musician. But I’ve been jamming tech death lately because it helps me study. I gotta recommend Mare Cognitum for a heavier atmoblack sound, and Trna for an instrumental post-black metal expanse.
That’s wild about the tube screamer, given the chubbies pedal heads get over them.
I keep a Tube Screamer clone in my arsenal precisely because it’s not a very “tube-like” sound. It is midsy and crisp. Like 95% of all metal recordings are done with a Tube Screamer, 5150, and Vintage 30s, or digital clones thereof. It’s a dream to play, but that’s kinda why I’m trying to move away from it as a guitarist. But as a producer, it’s a really nice tool for the toolkit. Also if I was playing tech death or something where I’m at the limit of my skill level, I’d probably rock a Tube Screamer so stuff is easier to play.
Also though, I believe that historically it was marketed as “tube sound without the tubes” early in its life. Which, compared to some of the alternatives available at the time, did come a little closer to tube amps’ softer clipping.
But IMO as a metal player and electrical engineer, I think that whether or not your distortion is generated by tube, transistor, or simulation isn’t that important [1] compared to properly tailored filters at every stage, especially the “tone stack”, at least not for metal players or people who play with “fully saturated” distortion. For this reason, I’m absolutely not afraid to use solid state amps if they sound good, and for metal they absolutely can sound better in the mix.
And even though tube distortion pedals do literally have a tube in the signal chain, it’s probably not being run at a high enough voltage to actually be the source of distortion; you’d have to check the schematic to be sure. So there’s no benefit to getting a tube distortion pedal in general, and tubes have microphonics and just electrically kinda suck. Tube amps are great, but again, it’s more because of the various filters, and because the saturating nonlinearity exists at all, rather than because the nonlinearity is generated by a specific device (so long as you use fresh tubes, since old tubes do deteriorate an amplifier’s performance). But also, tube amps are “warm” without any further filtering, and I typically find that “warm” amps have trouble standing out in a metal mix. Hence, when I (and other metal players) pick tube amps, I pretty much exclusively use amplifiers (and their simulations) that filter out that warmness, which shows up in metal as muddy garbage.
[1] Assuming you’re using monotonic distortion characteristics like soft and hard clipping, power laws, and exponentials. Non-monotonic distortion characteristics (like a sine or Chebyshev polynomial) sound whack, and I really wish more metal guitarists would check them out.
From Wikipedia:
A fatwa is a legal ruling on a point of Islamic law (sharia) given by a qualified Islamic jurist (faqih) in response to a question posed by a private individual, judge or government.
So unless you live in a country that runs on Sharia law and have your real information connected to your account, you can safely ignore it.
You could report the poster for being mean if it goes against that community’s or instance’s rules. Up to you. But I think someone’s just being a troll.
What do you do for work, or what are you studying towards
Studying towards masters in electrical engineering.
Musical recommendations (bonus points for metal)
If you like the song by Beyond Creation, you’ll love this.
Useless tidbit you know (bonus points for citing sources)
The Ibanez Tube Screamer pedal contains no tubes.
Best meal you’ve had
I have a sous vide machine. I’ve obviously done the whole “food porn steak” thing, but one time I took an entire chicken breast, sous vided it like a steak, breaded it, pan fried it, and ate it like a massive chicken nugget with a juicy interior.
Best place you’ve visited
Home lmao. Close second would be Cedar Point, although I don’t think I can fit on roller coasters at my current weight.
To those on the other side, they think we mean “Get rid of all police, zero funding, go away”
It was literally “abolish the police”, but the shitlibs watered it down to nothing as usual.
I’m funny
No
Holy shit I’m so old 😭