Poll: What samplerate do you use with jack?

Optimize your system for ultimate performance.

Moderators: MattKingUSA, khz

What samplerate do you use with jack?

Poll ended at Mon Jul 30, 2018 11:45 am

44100: 12 votes (34%)
48000: 21 votes (60%)
88200: no votes
96000: 1 vote (3%)
other: 1 vote (3%)
Total votes: 35

lilith
Established Member
Posts: 1698
Joined: Fri May 27, 2016 11:41 pm
Location: bLACK fOREST
Has thanked: 117 times
Been thanked: 57 times

Re: Poll: What samplerate do you use with jack?

Post by lilith »

Lyberta
Established Member
Posts: 681
Joined: Sat Nov 01, 2014 8:15 pm
Location: The Internet
Been thanked: 1 time

Re: Poll: What samplerate do you use with jack?

Post by Lyberta »

Doesn't open with Tor.
tramp
Established Member
Posts: 2347
Joined: Mon Jul 01, 2013 8:13 am
Has thanked: 9 times
Been thanked: 466 times

Re: Poll: What samplerate do you use with jack?

Post by tramp »

42low wrote:But i'll bend and shut up. You win! (Do you?)
Your tendency to take everything personally is really annoying. :?
On the road again.
khz
Established Member
Posts: 1648
Joined: Thu Apr 17, 2008 6:29 am
Location: German
Has thanked: 42 times
Been thanked: 92 times

Re: Poll: What samplerate do you use with jack?

Post by khz »

I use 48 kHz ((khz) ;-) ) because of ADAT.
A rate divisible by 44.1 kHz would be ideal.
That means producing at double resolution - at 88.2 kHz (or 176.4 kHz ;-)).
This allows loss-free downsampling to 44.1 kHz for publication.
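For what it's worth, here is a minimal sketch (my illustration, not necessarily khz's workflow) of integer-ratio downsampling using SciPy's polyphase resampler; with a 2:1 ratio the anti-aliasing filter is the only processing involved, no interpolation between sample instants is needed, which is why 88.2 kHz -> 44.1 kHz is cleaner than 96 kHz -> 44.1 kHz:

Code: Select all

# Integer-ratio downsampling from 88.2 kHz to 44.1 kHz (illustrative).
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 88200, 44100           # exact 2:1 ratio
t = np.arange(fs_in) / fs_in           # 1 s of test signal
x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz tone sampled at 88.2 kHz

# resample_poly applies a polyphase anti-alias filter and then keeps
# every 2nd sample.
y = resample_poly(x, up=1, down=2)
print(len(x), len(y))                  # 88200 44100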
. . . FZ - Does Humor Belong in Music?
. . GNU/LINUX@AUDIO ~ /Wiki $ Howto.Info && GNU/Linux Debian installing >> Linux Audio Workstation LAW
  • I don't care about the freedom of speech because I have nothing to say.
CrocoDuck
Established Member
Posts: 1133
Joined: Sat May 05, 2012 6:12 pm
Been thanked: 17 times

Re: Poll: What samplerate do you use with jack?

Post by CrocoDuck »

I don't normally like to participate in off-topic sub-discussions. However, I thought I would share what I know on the topic. I hope this can help end the sub-discussion and bring the thread back on its rails. I don't claim to be the supreme authority in the field, but I have a degree in Physics and a degree in Acoustics, which should help me avoid affirmations that are too incorrect (unless my degrees were a waste of money...). Perhaps, if we want to discuss this further, we should open another thread.
Lyberta wrote:
Doesn't open with Tor.
It is a paper by several Japanese neuroscientists. I will reproduce the synopsis here; I don't think that breaches any license:
Oohashi, Tsutomu, Emi Nishina, Manabu Honda, Yoshiharu Yonekura, Yoshitaka Fuwamoto, Norie Kawai, Tadao Maekawa, Satoshi Nakamura, Hidenao Fukuyama, and Hiroshi Shibasaki.

Inaudible high-frequency sounds affect brain activity: hypersonic effect.

J Neurophysiol 83: 3548-3558, 2000.

Although it is generally accepted that humans cannot perceive sounds in the frequency range above 20 kHz, the question of whether the existence of such “inaudible” high-frequency components may affect the acoustic perception of audible sounds remains unanswered. In this study, we used noninvasive physiological measurements of brain responses to provide evidence that sounds containing high-frequency components (HFCs) above the audible range significantly affect the brain activity of listeners. We used the gamelan music of Bali, which is extremely rich in HFCs with a nonstationary structure, as a natural sound source, dividing it into two components: an audible low-frequency component (LFC) below 22 kHz and an HFC above 22 kHz. Brain electrical activity and regional cerebral blood flow (rCBF) were measured as markers of neuronal activity while subjects were exposed to sounds with various combinations of LFCs and HFCs. None of the subjects recognized the HFC as sound when it was presented alone. Nevertheless, the power spectra of the alpha frequency range of the spontaneous electroencephalogram (alpha-EEG) recorded from the occipital region increased with statistical significance when the subjects were exposed to sound containing both an HFC and an LFC, compared with an otherwise identical sound from which the HFC was removed (i.e., LFC alone). In contrast, compared with the baseline, no enhancement of alpha-EEG was evident when either an HFC or an LFC was presented separately. Positron emission tomography measurements revealed that, when an HFC and an LFC were presented together, the rCBF in the brain stem and the left thalamus increased significantly compared with a sound lacking the HFC above 22 kHz but that was otherwise identical. Simultaneous EEG measurements showed that the power of occipital alpha-EEGs correlated significantly with the rCBF in the left thalamus. Psychological evaluation indicated that the subjects felt the sound containing an HFC to be more pleasant than the same sound lacking an HFC. These results suggest the existence of a previously unrecognized response to complex sound containing particular types of high frequencies above the audible range. We term this phenomenon the “hypersonic effect.”
I linked to this paper in another thread too, which I now cannot dig out. I think that to make sense of this we ought to first consider how we perceive infrasonic sound (frequencies below 20 Hz), as the principles could be very similar. Also, there is plenty more evidence on the effects of exposure to infrasonic sound, since a lot stems from studies of noise pollution from industry and power plants. Adam Neely made a very good video essay on the topic: https://www.youtube.com/watch?v=tMSXdCWbRHw

We don't hear infrasonic sounds with our ears. Or rather, it is extremely hard to. Here, look at the equal loudness contours:

[Image: equal loudness contours]

If you want to hear a 20 Hz tone at the same loudness as a 1 kHz tone, then the 20 Hz tone, depending on the 1 kHz level we are comparing to, might need to be as much as 70 dB higher. This is -insane-. Just to give you a reference, the level gap between a very, very quiet room and the threshold of pain is around 80 dB.
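To put that 70 dB gap into linear numbers (my own back-of-envelope arithmetic, not taken from the contours themselves):

Code: Select all

# A 70 dB gap in linear terms: dB(SPL) is 20*log10 of the pressure ratio.
gap_db = 70
pressure_ratio = 10 ** (gap_db / 20)   # ~3162x the sound pressure
power_ratio = 10 ** (gap_db / 10)      # ~10,000,000x the acoustic power
print(round(pressure_ratio), f"{power_ratio:.0e}")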

However, our body cavities and organs, and also our eyes, resonate at very low frequencies. These vibrations can in theory produce vibration of the cochlea, but they actually end up producing other kinds of sensations. That is, we don't perceive infrasonic sound through the ears directly, but through a sort of "touch" given by the vibration of our internal organs. Some studies found correlations with feelings of paranoia and depression, which is why many studies are being done, for example, on the noise of wind turbines, which emit a lot of infrasonic noise pollution. A literature search finds many results:

https://scholar.google.co.uk/scholar?hl ... al+effects
https://scholar.google.co.uk/scholar?hl ... ines&btnG=

Now, the research from these Japanese researchers seems to suggest that we do not perceive hypersonic sound (>20 kHz) with the ears at all. I personally cannot hear anything from my laptop headphone out beyond 16 kHz, no matter how loud I play it. However, they found evidence of different brain activity when subjects were exposed to hypersonic sound, which could mean that we can perceive it through some biological effect. I haven't read the paper in full (yet), so I don't know whether they have an idea of what effect underlies the perception.
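If you want to probe your own upper limit, it is easy to script a rough self-test; this is just a sketch (the frequencies, level and file names are made up), and you should keep the playback level low:

Code: Select all

# Write short sine tones to WAV files; play them back and note where
# you stop hearing anything. Keep the amplitude low to protect your ears.
import numpy as np
import wave

def write_tone(freq_hz, path, fs=48000, dur_s=2.0, amp=0.1):
    t = np.arange(int(fs * dur_s)) / fs
    x = (amp * 32767 * np.sin(2 * np.pi * freq_hz * t)).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit PCM
        w.setframerate(fs)
        w.writeframes(x.tobytes())

for f in (12000, 14000, 16000, 18000):
    write_tone(f, f"tone_{f}.wav")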

So, to wrap this up: the ear itself is indeed limited to 20 Hz - 20 kHz, but every human perception stems from integrating different stimuli from different sources. So acoustic fields with frequency components that are not directly audible can indeed have a biological effect, and produce perception, through other mechanisms.

So, what does this mean? Should we all record at 768 kHz? I don't think so.

First of all, if you have a loudspeaker, I would recommend downloading this little freeware program and simulating it as best you can. It works OK with Wine. As an example, let's look at the default speaker response, a common bass reflex design, which I show here up to 96 kHz, the maximum frequency we can stream when sampling at 192 kHz:

[Image: simulated frequency response of a bass reflex speaker, shown up to 96 kHz]

So, we see that speakers do not normally have a flat response. I will not go into the details of why that happens, but the same holds for professional speakers. It is down to physics, which is why in professional studios the response of the room is matched to that of the speakers and vice versa. The plot shows us that speakers are essentially band-pass filters. So, if we want to reproduce very low frequencies well, we need a specialized speaker. If we want to reproduce high frequencies well, we need another specialized speaker. This is why we have woofers and tweeters. And this is without considering the passband of the electronics. Normally, audio electronics will try to filter out everything from at least 1 MHz up (or even 500 kHz), to avoid accidental demodulation of AM radio, which you would then hear through your equipment. Also, one doesn't normally want to feed very high frequencies to a speaker. The plot above cannot represent it, as it uses a so-called lumped model, but speakers become modal above a certain frequency, and the response starts having crazy peaks and valleys: not a desirable thing for low distortion and high fidelity.
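As a toy illustration of the "speakers are band-pass filters" point (my own simplified sketch, not the simulation program mentioned above; the corner frequencies are invented):

Code: Select all

# Toy band-pass "speaker": 2nd-order high-pass (low-end roll-off near
# 50 Hz) cascaded with a 2nd-order low-pass (HF roll-off near 20 kHz).
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 192000                            # lets us inspect up to 96 kHz
sos = np.vstack([
    butter(2, 50, btype="highpass", fs=fs, output="sos"),
    butter(2, 20000, btype="lowpass", fs=fs, output="sos"),
])
f, h = sosfreqz(sos, worN=4096, fs=fs)
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
# mag_db rolls off below ~50 Hz and above ~20 kHz: a band-pass
# response, not the flat line people tend to imagine.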

So what's the point? That if you want to expose yourself to the extremes of the audio spectrum, you normally need to build up a chain of equipment well chosen to do so. When you factor in standard equipment in a normal room, with normal background noise, most likely you are cutting off the extremes, or significantly distorting them. In fact, in the study from our Japanese friends we read:
Most of the conventional audio systems that have been used to present sound for determining sound quality were found to be unsuitable for this particular study. In the conventional systems, sounds containing HFCs are presented as unfiltered source signals through an all-pass circuit and sounds without HFCs are produced by passing the source signals through a low-pass filter (Muraoka et al. 1978; Plenge et al. 1979). Thus the audible low-frequency components (LFCs) are presented through different pathways that may have different transmission characteristics, including frequency response and group delay. In addition, inter-modulation distortion may differentially affect LFCs. Therefore it is difficult to exclude the possibility that any observed differences between the two different sounds, those with and those without HFCs, may result from differences in the audible LFCs rather than from the existence of HFCs
In order to measure the effects of hypersonic sound, these researchers had to design a proper audio system from scratch. Please read this carefully again:
Most of the conventional audio systems that have been used to present sound for determining sound quality were found to be unsuitable...
The authors are not referring to off-the-shelf audio systems, but to audio systems designed for psychoacoustic sound quality evaluation experiments, which adhere to very strict international standards. Even those were not appropriate for their studies.

What about headphones? Headphones too have a band-pass characteristic. They will always drop off low-frequency sound to some degree, depending on how well they seal on the head. Above a few kHz their response becomes increasingly crazy, with peaks and valleys and resonances of all kinds. I have measured many models and found sharp roll-offs not much above 20 kHz. The resonances cannot be avoided, as they are due to headphones being small cavities of air. Near the peaks, THD can reach 80%. I am not sure headphones can be considered high-fidelity hypersonic audio systems. Not to mention that by coupling sound directly to the ears they bypass the other biological channels by which we perceive sounds that are not ear-audible, meaning that we would perhaps not really be exposed to any infrasonic or hypersonic effects.

So, we ought to conclude that, as far as off-the-shelf equipment is concerned (which I believe is what all of us normally use), highly ultrasonic sound reproduction is best avoided, as it tends to produce unwanted distortion rather than desired artefacts, and that distortion can leak, through intermodulation, into the nominal audible range too.
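A quick numerical sketch of that intermodulation mechanism (an idealized quadratic nonlinearity, not a model of any real driver): two tones above 20 kHz passed through a slightly nonlinear device produce a difference tone squarely inside the audible band.

Code: Select all

# Two ultrasonic tones through a weakly nonlinear "device": the x^2
# term creates sum and difference tones; 27 kHz - 24 kHz = 3 kHz.
import numpy as np

fs = 192000
t = np.arange(fs) / fs                           # 1 second
x = np.sin(2*np.pi*24000*t) + np.sin(2*np.pi*27000*t)
y = x + 0.1 * x**2                               # toy nonlinearity

spec = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1/fs)
audible = (freqs > 20) & (freqs < 20000)
print(freqs[audible][np.argmax(spec[audible])])  # 3000.0 Hz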

I hope this is not just random rambling, and that it makes some sense.
AndersBiork
Established Member
Posts: 24
Joined: Tue Nov 15, 2016 12:11 pm
Has thanked: 6 times
Been thanked: 5 times

Re: Poll: What samplerate do you use with jack?

Post by AndersBiork »

What kind of microphone do you use to record those frequencies?

My mother tongue is Swedish. Running KXstudio on Xubuntu 22.04. No wine. No whine. But beers.

CrocoDuck
Established Member
Posts: 1133
Joined: Sat May 05, 2012 6:12 pm
Been thanked: 17 times

Re: Poll: What samplerate do you use with jack?

Post by CrocoDuck »

AndersBiork wrote:What kind of microphone do you use to record those frequencies?
Similarly to what happens with speakers, microphones need to be designed properly to pick up hypersonic sound. However, microphones, especially condenser microphones, can be made to have flat responses over very large ranges. This microphone, for example, has bandwidth up to 140 kHz:

https://www.gras.dk/products/measuremen ... 687-46dp-1

That is a measurement microphone specifically designed for that. Microphones used for music normally don't stretch that far. See for example the frequency response of the SM58, which doesn't even reach 20 kHz:

https://www.google.com/search?tbm=isch& ... RpwUTzdcqM:
sysrqer
Established Member
Posts: 2523
Joined: Thu Nov 14, 2013 11:47 pm
Has thanked: 320 times
Been thanked: 151 times

Re: Poll: What samplerate do you use with jack?

Post by sysrqer »

42low wrote: At the other hand then lowcut wouldn't be needed either (but that's something everyone does, as needed, the lower frequencies than the stats give).
I think it would be needed. Even from one source/channel, -10 dB below 50 Hz is not insignificant; a couple of channels running at that level in that frequency range could make quite a difference.
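Back-of-envelope, assuming either perfectly in-phase or fully uncorrelated channels (the two limiting cases):

Code: Select all

# How -10 dB of low end builds up across channels.
import math

def sum_coherent_db(level_db, n):    # identical, in-phase content
    return level_db + 20 * math.log10(n)

def sum_incoherent_db(level_db, n):  # uncorrelated content
    return level_db + 10 * math.log10(n)

print(sum_coherent_db(-10, 4))       # ~ +2 dB from four in-phase channels
print(sum_incoherent_db(-10, 4))     # ~ -4 dB from four uncorrelated ones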
sysrqer
Established Member
Posts: 2523
Joined: Thu Nov 14, 2013 11:47 pm
Has thanked: 320 times
Been thanked: 151 times

Re: Poll: What samplerate do you use with jack?

Post by sysrqer »

42low wrote:...or edits/plugins done over that sound building up out off that sound in the same way. :mrgreen:
I have no idea what you're trying to say with that.
42low wrote:Now were there!!! Eventually. OMG WE DID IT! WE REACHED IT!
Now you state that sound beyond hearing ranges DOES can have effect. What I said all the time and everyone said would be bullshit and crap!!!!
Calm down, mate. Sub-50 Hz is very much audible and certainly impacts mixing. Save your condescension for someone else; that was my first post in this thread, so your talk of 'now' and 'all the time' and 'everyone' is not something I'm involved in. Not sure what you're trying to prove here though, this stuff is widely researched.
Lyberta
Established Member
Posts: 681
Joined: Sat Nov 01, 2014 8:15 pm
Location: The Internet
Been thanked: 1 time

Re: Poll: What samplerate do you use with jack?

Post by Lyberta »

42low wrote:I said we should use the effect from ultrasound on the hearable sound.
Say what?
42low wrote:
AndersBiork wrote:What kind of microphone do you use to record those frequencies?
Record?? :roll: I mainly speak about editing within the daw.
Say what?
Spanner
Established Member
Posts: 76
Joined: Mon Mar 10, 2014 8:18 pm
Been thanked: 2 times

Re: Poll: What samplerate do you use with jack?

Post by Spanner »

I know the poll is closed, but I use 48 kHz.
I started using it for most stuff because one of my devices required it, and I have not really found a reason to change.
I will sometimes use other rates for various reasons.
CrocoDuck
Established Member
Posts: 1133
Joined: Sat May 05, 2012 6:12 pm
Been thanked: 17 times

Re: Poll: What samplerate do you use with jack?

Post by CrocoDuck »

Final closing comments from me.
42low wrote:
Yeah .... "You can't hear below 20 hz...." :mrgreen: :mrgreen: :mrgreen:
"very special hardware needed..."
"Impossible outside hearing ranges...."
https://www.youtube.com/watch?v=ozyjyUjp_zM
https://www.youtube.com/watch?v=P9DqkFjIokA
A subwoofer, designed to reproduce low frequencies, is special hardware indeed, as in "specialized for" low frequencies. Special doesn't mean esoteric, or hard to find, or extremely expensive. A hammer is a special tool: specialized for hammering nails.

As for "off the shelves" hardware, you can have any speaker radiating any frequency (or any microphone capturing any frequency). However, if you do it outside the passband, the more you are far into the stopband, the more you do it with less linearity, and with more unwanted side effects, among which cone breakup at high frequency:

https://www.google.com/url?sa=t&rct=j&q ... 6NKwhQDr36

A lot about speakers, and how they work, can be found on the Klippel website:

https://www.klippel.de/know-how/literature/papers.html
42low wrote:Don't underestimate hardware.
More like "know your hardware", so that you know how to use it best.
42low wrote:First you all do some real oscilloscope tests on sounds and hardware
Oscilloscopes can be useful, and I use them very often at work, mostly to study PCBAs and various electronics. However, to measure electroacoustic systems we normally use frequency analysers like these: https://www.01db.com/our-solutions/our- ... -analyzer/. They are much more convenient, as they can directly measure the frequency response.

If any of you is interested in doing measurements on the same principle as those analysers (closed-loop measurements), I wrote a tutorial some time ago: viewtopic.php?f=19&t=17759
Lyberta
Established Member
Posts: 681
Joined: Sat Nov 01, 2014 8:15 pm
Location: The Internet
Been thanked: 1 time

Re: Poll: What samplerate do you use with jack?

Post by Lyberta »

I think CrocoDuck deserves some kind of medal for bringing science and truth into this topic.

@42low, I've spent years in psychiatric hospitals and talked with quite a few people with various mental disorders. And your posts show signs of one - namely, reasoning and logic that make no sense to other people. Unfortunately, we can't help you until you recognize that you have a problem and ask qualified people for help.
Markus
Established Member
Posts: 81
Joined: Tue Jul 21, 2015 9:29 am

Re: Poll: What samplerate do you use with jack?

Post by Markus »

Apologies for a quite lengthy answer, but this thread sucks big time. So let's get back on topic.

Disclaimer: I am not the brightest bulb in the box. Not even close. So this is my personal opinion, no more, no less. No claim to "the truth".

I'm working @ 48 kHz all the time because for me it's the best compromise between CPU load, latency, project size, gear and sound quality. To be clear: no religion or beliefs involved. Let's work through this list. And then some further thoughts.

CPU Load

I tend to have quite a number of tracks in my productions and I like to throw a couple of effect plugins onto them. An average channel playing back a recording from a microphone, a DI box or some line out of an instrument normally has at least an EQ and a compressor. Most of the time there's some additional processing depending on the source, like expander, saturation, de-essing, exciter/bass enhancer, delay-based FX (chorus, phaser, flanger...), dedicated delays, stereo enhancement and other stuff. So saving CPU is mandatory for my mixes. The more samples to calculate per second, the higher the CPU load.

This only matters when it comes to mixing/mastering and doesn't affect the recording. But since downsampling is a useless waste of time, why record at a higher rate than the one you'll mix at on the same machine/in the same environment?

tl;dr Huge mixes at high sample rates eat up huge amounts of CPU, so 48k is a good compromise regarding my average amount of DSP used.

Latency

The more samples per second, the less latency. Somehow.

Sound servers used in consumer operating systems on consumer hardware like Win/OSX/Linux (which includes every software-driven DAW in this world, however "professional" the product managers call it) are configured with a block size and a number of blocks for buffering. Block size means samples per round trip. The analog input is sampled in real time on the audio hardware, but because of the architecture of a personal computer, only chunks of samples are sent over the (whatever) bus to the CPU on IRQ. At a higher sample rate, one chunk of samples corresponds to less time, so the latency for a full round trip of the system goes down. This only works as long as the hardware is able to keep up with the amount of data without starting to respond with clicks and crackles.
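As a rough sketch of that arithmetic (buffer latency only, ignoring converter, driver and wire latency; the 256-frame/2-period setting is just an example):

Code: Select all

# Buffer latency for a period-based server (JACK-style):
# frames per period x periods, divided by the sample rate.
def buffer_latency_ms(frames, periods, rate):
    return 1000.0 * frames * periods / rate

for rate in (44100, 48000, 96000):
    print(rate, "Hz:", round(buffer_latency_ms(256, 2, rate), 2), "ms")
# 44100 Hz: 11.61 ms / 48000 Hz: 10.67 ms / 96000 Hz: 5.33 ms
# Same block size, higher rate -> each chunk spans less time.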

Since I like to have some effects while recording (e.g. some EQ or reverb), my system has to cope with both - the lowest possible latency and some CPU cycles left for calculating DSP without producing buffer over- or underruns (which would ruin the recording if they appeared before the sound hits the CPU to be sent to the HDD).

BTW: This was not a problem back in the day, when there were hard disk recorders with zero latency because the HDD was attached directly via SCSI bus and the DAW running on the computer was only used as a GUI. I used to use two Yamaha CBX-D5 together with a dedicated Cubase 2(?) - which means 2 ins and 4 outs per device for some 2000 bucks each :eek:

tl;dr 48k is a good compromise for my productions between latency and effects used while recording without running into buffer over-/underruns.

Project Size

Just some minor calculations. Let's assume an average fictional pop/rock demo production of 4 minutes (240 seconds) with a well-rehearsed, tight band.

Drummer - 12 tracks:

Bassdrum1
Bassdrum2
Snare1
Snare2
HiHat
OH left
OH right
Ride
Tom hi
Tom lo
Floor tom
FX (SideSnare, Cowbell, whatever)

The drummer needs 5 full takes, 240 seconds times 12 tracks each:
14400 seconds

2 Guitarists - 4 tracks:

Close Mic L
Room Mic L
Close Mic R
Room Mic R

They perform better than the drummer, so they need only 4 full takes each:
3840 seconds

Bass player - 2 tracks:

DI-Box
Close Mic

This guy sucks like the drummer, resulting in 5 full takes:
2400 seconds

Keyboards - 3 stereo tracks:

Organ L
Organ R
Pad Synth L
Pad Synth R
FX L
FX R

Organ and pad take 2 full takes each (yes, he's the hero of the combo :)), FX are recorded in pieces, let's assume a single full take for all FX snippets:
2400 seconds

Percussions - 4 tracks:

Conga L
Conga R
Shakers
FX

This guy needs 3 full conga takes and 2 full takes for FX and shakers:
2400 seconds

Lead vocals - 1 track:

The singer needs two hours, generating 10 full takes.
2400 seconds

Backing vocals - 3 tracks:

The one that can sing
The one that looks good
The one that rolls the biggest blunts

The first one takes 2 full takes (we can copy stuff from choruses, so don't worry ;)), the latter ones need 5 takes each:
2880 seconds

Let's assume brasses aren't needed in this particular production. This means:

14400 Drums
3840 Guitars
2400 Bass
2400 Keyboards
2400 Percussions
2400 Lead
2880 Backings
___________________
30720 seconds

30720 s * 32 bit * 48000 smpl/s = 47185920000 bit = 5898240000 byte ≈ 5.5 GiB
30720 s * 32 bit * 96000 smpl/s = 94371840000 bit = 11796480000 byte ≈ 11 GiB
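The same estimate as a tiny script (mono tracks, 32-bit samples, no compression, using the 30720 s total from the list above):

Code: Select all

# Uncompressed project size: seconds x sample rate x bits / 8.
def project_bytes(seconds, rate, bits=32):
    return seconds * rate * bits // 8

for rate in (48000, 96000):
    b = project_bytes(30720, rate)
    print(rate, "->", b, "bytes =", round(b / 2**30, 2), "GiB")
# 48000 -> 5898240000 bytes = 5.49 GiB
# 96000 -> 11796480000 bytes = 10.99 GiB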

This applies to an overdubbed recording. Recording the band all together means doubling or tripling the amount of data because of constant fuck-ups by single musicians.

tl;dr Running a recording studio for nearly two decades meant dealing with a huge amount of data - which has become surprisingly affordable lately.

Gear

We could talk about buying the latest, most hip and overly expensive audio sh*t for your home studio, but that's not my point here.

It doesn't make any kind of sense to have the most HQ stuff at home anyway: if you really get famous with your productions (bwahah), it will in any case be re-recorded, re-produced or at least re-mastered in a "professional" recording studio. I would never spend a shitload of bucks on gear just to let some ideas materialize, play them to a couple of friends or stream them over the interflubes afterwards.

I'd like to talk about the reality of how people consume audio instead. This means DVDs, iPods, phones, car hi-fi, boom boxes, cheap Bluetooth speakers, cheaper headphones, speakers in the ceiling of a mall, Blu-rays, laptops, TVs, whatever. Quality-wise, all of this stuff sucks tremendously. How many people have real high-end gear at home - and how often do they use it? And does anyone of us over here really expect to get anywhere near these devices with our bleeps and buzzes? This hardware was built to reproduce the latest Barenboim recordings or some utterly famous jazz formations like the Vandermark 5 and the like. These people don't even play shitty CDs or anything else digital - they spend hilarious amounts of money on having the most HQ vinyl productions in their collection. Nobody else, using average gear, will hear a noticeable difference between a 44.1k and a 96k production while consuming whatever kind of audio. OK, maybe except the very few people with high-end studio speakers and an appropriate D/A converter and amplifier at home.

There was someone in this thread stating that he doesn't hear anything above 16k on his notebook, whatever gain he throws at it. This is due to the average consumer audio gear we all have to deal with. A low-pass filter has to be applied to any kind of digital signal before it is brought back into the analog world. If not, artifacts related to the Nyquist frequency can appear. To avoid this, all converters include this kind of filtering. The cheaper the hardware, the worse the filter. Hearing nothing above 16k is quite usual for the vast amount of hardware out there, since HQ filters cost HQ money.
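For the curious, a sketch of the folding arithmetic around Nyquist (this is the sampling-side view; on the playback side the same low-pass removes the mirror images above rate/2):

Code: Select all

# Where an out-of-band component lands after sampling at `rate`:
# frequencies fold (alias) around multiples of the Nyquist frequency.
def alias_freq(f, rate):
    f = f % rate
    return min(f, rate - f)

print(alias_freq(30000, 48000))   # 18000: a 30 kHz artifact lands at 18 kHz
print(alias_freq(50000, 48000))   # 2000: a 50 kHz artifact lands at 2 kHz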

Another big point is: we're talking digital. Music isn't streamed, stored or transmitted as real PCM (after it has left the studio); it's even hard to get your desired stuff as (losslessly compressed) FLAC. The reality is: every kind of digital audio went through some kind of audio codec. All those codecs aren't even close to lossless. Every broadcast signal, every audio stream over the web, every Bluetooth connection, every DVD, MP3, OGG - every kind of digital audio signal was compressed beforehand because of bandwidth or storage capacity. Please take a closer look at how compression works - it always means stripping unneeded high frequencies, since they carry the most information while offering the least impression. Again: all higher frequencies are stripped before the rest gets crippled by how codecs work (take a closer look at even HQ JPEG images: they all introduce artifacts on high frequencies, which correspond to high-contrast pixels - the high frequencies of the image world). And it gets even worse, since audio is re-coded and re-sampled multiple times until it reaches our ears in whatever environment we're listening in.

tl;dr The gear people listen to audio with doesn't come close to reproducing the very fine-grained differences between e.g. 48k and 96k productions.

Sound Quality

Huge topic. I will not touch it because this is religion. Some people listening to recordings of The Beatles consider that "high end", while others discuss the mathematical differences between 24- and 32-bit calculations on ARM vs. i386 processors. So there's no point in discussing this topic. But to give an impression: I honestly don't hear any difference between 48 and 96 kHz on my (passably expensive) gear. No one in my whole social environment has more accurate, better-measured gear. None of the few people listening to my unprofessional shit could even tell whether it was recorded analog or digital. So why should I care?

Ultrasonic

There was some talk about ultrasonic sound (>20 kHz).

Yes, there are people who get some kind of impression from ultrasonic waves. Most of them are < 1 year old; those guys can hear up to 30k. We'll come back to that later. A tiny number of adults can barely get some impression above 16-18 kHz, but still far from > 20 kHz. But: there is a use for audio signals in this band in professional audio, mostly for broadcasting side-band information. Just one example: detecting the status of independent speaker systems in large installations like airports or train stations. They all send HF-encoded audio signals which are then received by special hardware to determine the status of dozens of devices at the same time. That, for instance, is what you need tweeters > 25 kHz for.

All in all, it turns out that ultrasonic sound might be a real problem. Please consult the work of Timothy Leighton, who investigated the correlation between ultrasound and health issues like tinnitus, migraine, concentration disorders and much more.

Someone was like "that is natural hearing, so don't cut it off". Honestly, I don't think that's true. Ultrasonic sound is very rare in nature. Test it yourself - there's an app out there showing the levels of ultrasonic audio surrounding the phone. There's none, except in some urban or industrial areas. But nowhere in the woods or anywhere else in nature (okay, when the bats invade our garden at sundown there's some ultrasound around, but that's basically it). Additionally - this was about producing sounds with a DAW - if you do something without any chance of control or supervision, don't do it. There's no plugin I know of that can control anything above 20 kHz, your other gear will most likely not be able to reproduce it, and you are biologically unable to reliably recognize any of it - this is the best way to fuck things up big time.

And somebody was like "if I set an effect to 47.9 kHz or even 48.1 kHz" - could someone please point me to any kind of studio gear (digital, analog, software, hardware, whatever) which is capable of doing so?

Back to the babies - the findings suggest that these tiny fellows suffer a lot from all the ultrasonic imaging done these days. And - as it seems - so do we.

tl;dr Ultrasonic content in audio production is a bad idea, in my opinion.

Pro Audio

There was some talk about pro audio and the professionals saying this and that. Please do not consider recording studios as pro audio. They are not. They are as close to consumer electronics/software as a regular rally car is to a common car - compared to an F1 car. Pro audio doesn't deal with DAWs, operating systems like Win/OSX/Linux, plugins and the like. Pro audio is operas, sports events, malls, cruise ships, huge live concerts, broadcasters, film sets, airports, real-time translation, intercom and so on. They face the real challenges: bringing real-time audio to the ISS and back for a live concert, amusing hundreds of thousands of people at the same time with 5 drug victims jumping around a huge stage (and the next day on another continent), letting everybody in any kind of facility know about the parking offender blocking the ambulance, offering communication between hundreds of people in real time over the air without any interference or crosstalk, exchanging hundreds of audio/video channels over satellite link with a cruise ship at the North Pole, audibly following the action on a soccer field on a TV in real time on the other side of the globe, and things like that. Recording studios are the smallest and least professional niche of the pro audio market.

However. Let's talk about how (and why) "pro audio" is considered to lead the discussion.

I guess what is really underestimated in this discussion is the mechanics of capitalism. From my insight into the market, the question is: who leads the development? And this is clearly a) broadcast, but more specifically b) PSB (public service broadcasting). The reason is simple: everyone relies on them and they have the biggest bucket. And not only that - their bucket fills up automatically without them lifting a finger. They are throwing around billions per year. Especially the German PSBs, namely ARD, lead the worldwide discussion of what the next standard for whatever technology will be. It's all about sports, concerts, movies, advertising and the like. And we all know how "demand" works - in times of saturation, demand has to be made up by companies under the pressure of growth. And public buckets are the best ones to be milked, because nobody asks stupid questions about purpose. It's the hottest shit on the market right now? Go and get it. And anyone else dependent on those institutions has to be able to keep up. Again, we're not talking about VST plugins or 8-channel Focusrite converters - we're talking about standards. And as soon as a standard is set, everyone with the self-image of a "professional" has to follow the herd. Whether it makes sense or not is no longer the question.

Try asking the real professionals "why?". I mean people like the head of R&D at [huge company in JP]. His answer is "because." No joke.

Okay, last point:

The "Professionals"

They said this, they said that. They even contradict themselves within the same sentence. If you have 5 audio engineers, you'll end up with 12 different opinions (BTW, I'm one of those with 3 opinions in this peer group). Do I care? No. There are 10 studies saying this and another 10 saying that. It's a matter of how the question is formulated, how the environment is set up (every experiment is stripped down to its core to get any kind of usable result), how the test subjects are selected and much more, so that probably only meta-studies could make a point. But in fact those are even worse because of bias. So don't take all of this too seriously; it's just a matter of opinion.

And fun.

Oh, did I say fun? Let's head to the basement where my drumset...

*bumm*tschack*bumm*bumm*tschack*
Last edited by Markus on Thu Aug 02, 2018 9:49 am, edited 2 times in total.
protozone
Established Member
Posts: 181
Joined: Tue May 08, 2018 9:02 pm

48 kHz for the win.

Post by protozone »

Markus, I very much appreciate you bringing up a portion of very real reality: the sheer amount of time it takes to record multiple takes, whether overdubbing or not.

I work alone, and this is very much an issue for me. Even though most of my tunes are only 3 to 4 minutes long, tops - ready for traditional radio play - and a few are 5 minutes, each tune is very labor-intensive, and for some of the successful tunes I've spent TONS of hours on the same single tune. It's a small miracle that I'm not sick of every tune I complete, after having heard it in several similar permutations long before it's even finished. And then, after it's completed, I have to review it several times to check for errors and then fix them.

I think this math is often overlooked and forgotten by non-musicians and/or critics.
Part of the reason why I don't archive my stems (multitrack projects) is so that I don't go literally insane (re-)working the minutiae of a given tune. Deleting the stems when the tune is pretty much DONE puts a barrier between me and anybody else trying to change the tune into something other than what it has become after several hours/days/weeks of work.

I will often still add to a tune, but at that point it's like working with a stable subgroup and adding maybe one more isolated overdub.
Thank goodness for floating point lossless audio, so I'm not dealing with generation loss at that point.

I do happen to believe that there can be something similar to generation loss if too many playback and codec systems are plagued with noise and harmonic and inharmonic distortion. A lot of bad audio attributes are literally additive: they add up and can become badly audible.

So I like to keep it all pristine.
The biggest threat to my DAW work is being hacked and having somebody downgrade my audio while I'm still working on a project.
This happened to me at least once, and it made me VERY VERY ANGRY. I threw a temper tantrum. Somebody had put a horrible low pass filter on my completed audio tracks. It was painfully obvious to me. I got really ticked off because I like crisp and clear high end and full bass at the same time.

This was back in the days when I was living in one of many risky neighborhoods. I also found a fingerprint on one of my DVD archives.
I DON'T HOLD DVD'S NOR CD'S THAT WAY. SO I DON'T GET FINGERPRINTS ON THEM. I ought to have duplicated the fingerprint and submitted it to Law Enforcement, but I was in a forgiving mood at the time. I didn't want them coming back to "clean up their own mess", so I wiped off most surfaces in the room of fingerprints and carefully cleaned off the DVD disc and wiped my system of the infiltrationware and started from scratch right before I moved out.

Anyways, back to life, back to reality, back to the here and now, yeah.
I also use 48 kHz most of the time. I agree it's a good compromise.
Occasionally I dip back into 44.1 kHz because some of my favorite old classic software can't quite handle 48 kHz - back then it was too "new". But usually 48 kHz is not a problem at all. A long time ago I tried 96 kHz audio, and I couldn't tell the difference in my own home listening tests of my own material.

I am able to hear the differences between WAV and M4A or WAV and MP3 or WAV and OGG or WAV and ATRAC at times if it's tunes that I composed.
Often it's overlooked that lossy perceptual coding is loosely designed for loudspeaker situations, where bass localization supposedly doesn't matter. But I find that some of the highest-quality audio has nice stereo imaging of the low-mids and even some bass. Lossy codecs just don't represent that stuff well. And I routinely have issues with hi-hats and cymbals being turned into static or "flanged slush" when played back as MPEG audio or similar.

I usually can't tell the difference between 16-bit and 24-bit either, except when the DAC hardware is quieter. But even that's not a modern issue any longer. Thank goodness for high-bit-resolution audio. I used to cope with noise on high-bias metal cassettes, and I was proud of that, but it was tricky.

I will eventually make videos for my tunes, so 48 kHz is already there.
It's just kinda funny that I don't much use JACK.
Locked