Go-DSP-Guitar multichannel multi-effects processor
Moderators: raboof, MattKingUSA, khz
-
- Established Member
- Posts: 8
- Joined: Tue Jan 28, 2020 11:45 am
Go-DSP-Guitar multichannel multi-effects processor
Hi,
I stumbled across a new guitar multichannel multi-effects processor. As it is quite simple, I tend to use it quite often. As I am a beginner, I would like to know your thoughts about it. I am not involved in the project.
https://github.com/andrepxx/go-dsp-guitar
DAW: Ardour
Guitars: Acoustic Yamaha CPX900; Ibanez RG 370DX
Audio: Focusrite Scarlett Solo Studio
Keyboard: Arturia Analog Factory Expression
Midi Pad: Akai Professional MPD32
Roland Cube 40GX & Footswitch
- SpotlightKid
- Established Member
- Posts: 260
- Joined: Sun Jul 02, 2017 1:24 pm
- Has thanked: 57 times
- Been thanked: 61 times
Re: Go-DSP-Guitar multichannel multi-effects processor
Does real-time audio with Go even work (glitch-free)? Doesn't its garbage collector get in the way?
-
- Established Member
- Posts: 8
- Joined: Tue Jan 28, 2020 11:45 am
Re: Go-DSP-Guitar multichannel multi-effects processor
Hi SpotlightKid,
I wouldn't be able to tell you. I am free of any musical talent, so even listening while following the notation, I would struggle to tell you about the correct notes. That's why I am asking about the quality.
DAW: Ardour
Guitars: Acoustic Yamaha CPX900; Ibanez RG 370DX
Audio: Focusrite Scarlett Solo Studio
Keyboard: Arturia Analog Factory Expression
Midi Pad: Akai Professional MPD32
Roland Cube 40GX & Footswitch
- sadko4u
- Established Member
- Posts: 989
- Joined: Mon Sep 28, 2015 9:03 pm
- Has thanked: 2 times
- Been thanked: 361 times
Re: Go-DSP-Guitar multichannel multi-effects processor
As an R&D project it would be OK.
But the final performance of your software will be poor.
I'm currently talking about DSP algorithms like resampling:
https://github.com/andrepxx/go-dsp-guit ... esample.go
And FFT transform:
https://github.com/andrepxx/go-dsp-guit ... fft/fft.go
And Filtering:
https://github.com/andrepxx/go-dsp-guit ... /filter.go
Everywhere I see mutex locks, which are prohibited in real-time audio processing. The same can be said about the garbage collector, whose CPU utilization behaves unpredictably over the program's lifetime.
Poor performance -> many instances won't work well on the same machine, because they'll eat 100% of the CPU and ask for more.
But sound engineers LIKE many instances!
So I suppose Go is not the language that should be used if you consider the users of your software to be professionals.
LSP (Linux Studio Plugins) Developer and Maintainer.
-
- Established Member
- Posts: 8
- Joined: Tue Jan 28, 2020 11:45 am
Re: Go-DSP-Guitar multichannel multi-effects processor
Thanks for the answer, and even more for the explanation.
DAW: Ardour
Guitars: Acoustic Yamaha CPX900; Ibanez RG 370DX
Audio: Focusrite Scarlett Solo Studio
Keyboard: Arturia Analog Factory Expression
Midi Pad: Akai Professional MPD32
Roland Cube 40GX & Footswitch
Guitars: Acoustic Yamaha CPX900; Ibanez RG 370DX
Audio: Focusrite Scarlett Solo Studio
Keyboard: Arturia Analog Factory Expression
Midi Pad: Akai Professional MPD32
Roland Cube 40GX & Footswitch
Re: Go-DSP-Guitar multichannel multi-effects processor
Hi!
I just saw traffic originating from this site on my GitHub account.
I started developing "go-dsp-guitar" in October 2013, so let me quickly tell you about my motivation / design choices.
Of course, since the application is "my baby", I am heavily biased, so please take my words with a grain of salt.
SpotlightKid wrote: Does real-time audio with Go even work (glitch-free)? Doesn't its garbage collector get in the way?

Go features concurrent garbage-collection. The garbage collector runs in its own thread and won't interrupt execution of the "worker" threads / "stop the world" - at least not unless you REALLY run out of memory and it HAS to. Even if they occur, garbage-collection "pauses" in Go are sub-millisecond on any decent system. (There are documents about this.)
sadko4u wrote: But the final performance of your software will be poor. I'm currently talking about DSP algorithms like resampling:

The resampling algorithm is indeed "slow", but resampling will never be done in real-time. The simulation engine always runs in lockstep with the sample clock of your audio hardware / JACK server / session. Impulse responses are "resampled" to the sampling rate of the audio before they are applied. (This does not happen in real-time!)
sadko4u wrote: And FFT transform:

The FFT algorithm is indeed heavily optimized.
- I started off with a normal DFT (with O(n^2) runtime), then implemented the original Cooley-Tukey FFT (with O(n * log(n)) runtime).
- Then I replaced the Cooley-Tukey-algorithm (which is recursive and therefore "slow", since it always builds a stack of size O(log(n)) up and down) by an iterative in-place algorithm, which uses (at least asymptotically) constant memory and will therefore run without further allocations after being executed a few times.
- All roots of unity are pre-calculated and either stored in an array (O(1) access time) for the first 16384 coefficients or in a binary search tree (O(log(n)) access time) for all further coefficients (which are almost never accessed under realistic circumstances).
- Real FFT of size 2*N vector is calculated using complex FFT of size N vector plus O(N) post-processing operations.
sadko4u wrote: And Filtering:

That's a pretty standard OLA (overlap-add) FIR-filter implementation, making use of the fact that the convolution of two functions is the inverse Fourier transform of the (Hadamard) product of their Fourier transforms. The forward FFT of the filter impulse is, of course, pre-calculated, since it never changes, so the filtering is basically an FFT, a (vector) multiplication and an inverse FFT.
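For illustration, here is a minimal Go sketch of the overlap-add idea (my own toy code, not the project's): the signal is cut into blocks, each block is convolved with the kernel, and the overlapping tails of successive blocks are summed into the output. In a real implementation the inner convolution would be the FFT / pointwise-multiply / inverse-FFT described above; a direct convolution is used here only to keep the sketch self-contained.

```go
package main

import "fmt"

// olaFilter convolves signal x with FIR kernel h block by block,
// adding the tails of successive blocks together (overlap-add).
func olaFilter(x, h []float64, blockSize int) []float64 {
	out := make([]float64, len(x)+len(h)-1)
	for start := 0; start < len(x); start += blockSize {
		end := start + blockSize
		if end > len(x) {
			end = len(x)
		}
		// Convolve one block with the kernel and add it into the
		// output at the block's offset (the "add" in overlap-add).
		for i, xv := range x[start:end] {
			for j, hv := range h {
				out[start+i+j] += xv * hv
			}
		}
	}
	return out
}

func main() {
	x := []float64{1, 2, 3, 4, 5, 6}
	h := []float64{0.5, 0.5} // a simple two-tap averaging FIR
	fmt.Println(olaFilter(x, h, 4))
}
```

Because overlap-add is linear, the result is identical to one big convolution of the whole signal, regardless of the block size.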
sadko4u wrote: Everywhere I see mutex locks which are prohibited for real-time audio processing.

Locking is short and only done for communication between the control thread and the real-time (audio processing) threads. What do you mean by "prohibited"? Audio is so "slow" (it's milliseconds of response time in a computer that executes machine instructions on a sub-nanosecond basis) that locking for a short amount of time is okay. Your general-purpose operating system (read: Linux) won't be "hard" real-time anyway (not even with an RT-enabled kernel). There will be interrupts all over the place, which will degrade performance much more than these short locks. I'm not saying it's ideal, but it certainly won't hurt. Benchmarks confirm this.
Please try the application out for yourself before you draw conclusions. I successfully run go-dsp-guitar on an Ubuntu 18.04 LTS machine with graphic desktop and network up, my audio interface running at 96 kHz, and I comfortably (read: without any glitches) achieve about 5 ms of latency on a system from 2011 with an Intel Core i3 2310m (2 cores, 4 threads, 2.1 GHz, no turbo-boose) mobile processor. Running this on an "isolated" machine or one without a desktop environment and controling this over the network should give you even superior performance.
Also note that go-dsp-guitar is NOT primarily about performance. There are plenty of real-time audio applications / plugins for Linux and guitar that achieve good performance. (I think about Guitarix and rakarrack.) This project is about exactness and customizability. The code originates from a "circuit simulation" approach. It "models" the way guitar amplifiers work much more closely than other "guitar plugins", of course, potentially sacrificing performance along the way.
Finally ...
sadko4u wrote: many instances won't work well on the same machine because they'll eat all 100% CPU and ask for more.

The application is not designed to have you run multiple instances of it on the same machine. (That won't even work, because the communication between the UI and the "processing engine" occurs over a socket, and if that socket is already created, a second instance of the application won't be able to spawn.) You can simply have it create more "channels" (inputs and outputs), and it will internally process them all concurrently within the same process (and therefore address space).
Regards.
- sadko4u
- Established Member
- Posts: 989
- Joined: Mon Sep 28, 2015 9:03 pm
- Has thanked: 2 times
- Been thanked: 361 times
Re: Go-DSP-Guitar multichannel multi-effects processor
andrepxx wrote: The FFT-algorithm is indeed heavily optimized.

You think that it's heavily optimized. But actually it's not.
Consider such FFT optimizations:
https://github.com/sadko4u/lsp-plugins/ ... utterfly.h
https://github.com/sadko4u/lsp-plugins/ ... scramble.h
You won't get such freedom with Go.
andrepxx wrote: All roots of unity are pre-calculated and either stored in an array (O(1) access time) for the first 16384 coefficients or in a binary search tree (O(log(n)) access time) for all further coefficients.

If you use a large array or a binary search tree, then I have bad news for you. Complexity is not actual performance. Between an algorithm and its execution there are two bottlenecks: the CPU cache and the branch predictor. If your algorithm does not work efficiently with them, you won't get good results at all.
andrepxx wrote: That's a pretty standard OLA (overlap-add) FIR-filter implementation making use of the fact that the convolution of two functions is the inverse Fourier transform of the (Hadamard) product of their Fourier transforms.

Overlap-add has a serious drawback: with a constant FFT frame size, it introduces latency proportional to the FFT frame size.
andrepxx wrote: Locking is short and only done for communication between the control thread and the real-time (audio processing) threads. What do you mean by "prohibited"?

Prohibited in audio processing are:
- any system calls;
- any library calls that may yield to locks/waits;
- any memory allocations/deallocations;
- use of any synchronization primitives (locking/freeing).
There is no guarantee that your software won't stall in the audio loop for a long time while doing the things mentioned above, especially when it is running at a stage / gig. So you should strictly avoid them.
andrepxx wrote: Benchmarks confirm this.

My system uses a hard real-time Linux kernel. Benchmarks? Which ones?
Of course, I'm not trying to blame you. Anyway, you're doing a good and interesting job.
But I've mentioned some aspects that really are frightening, especially to me.
LSP (Linux Studio Plugins) Developer and Maintainer.
Re: Go-DSP-Guitar multichannel multi-effects processor
sadko4u wrote: Consider such FFT optimizations: [...] You won't get such freedom with Go.

Of course not, but I wouldn't even WANT them in this particular project.
The code you've shown is (mostly) hand-written assembly for a specific architecture (x86-64) AND even a specific instruction set extension (AVX). Apart from the obvious reason that I am currently nowhere near able to craft such a thing (I'm not really a system-level developer: I "know" how to develop application software and I also "know" some architecture-level details, but certainly not all of them), I deliberately chose to write / use only code in a "higher-level" language. It is much more portable (with your approach, you have to maintain separate implementations for different processors and even instruction set extensions) and much easier to maintain, and it is much easier for me to write code that gets the concept across and that others can understand and potentially re-use. The latter is also the reason why I chose a general-purpose language like Golang instead of something domain-specific like, say, FAUST.
In fact, one of the reasons why I even wrote go-dsp-guitar was that I was unsatisfied with current audio applications - not really so much with their performance, but rather with "the way they work": they often do pretty "ad-hoc" stuff that might "sound good" but is not physically motivated. In contrast, go-dsp-guitar models many things much closer to the way an actual circuit works. For example, if you take a look at how the "bandpass" effects unit is implemented in go-dsp-guitar, you will see that it actually calculates the way capacitors in an RC circuit charge / discharge when a varying external voltage / signal is applied. Of course, we should expect this approach to result in somewhat lower performance, but it's also much more "true to life".
But, well, since you "criticize" me for using Go: what's the alternative? Should I have implemented it in C, potentially with in-line assembly? Seriously? I mean, it's already about 20 - 30k lines when implemented in Golang. (And that's not "bloat".) How would you even implement such a thing in a much lower-level language like C without relying on a plethora of third-party dependencies, most of which you will have no idea how good and well-maintained they actually are unless you manually inspect and "approve" them all? And, since we talked about garbage collection, are you actually ever gonna get it free of memory leaks? I mean, good luck with even implementing stuff like TLS or a web server on top of little more than the POSIX socket API.
It's not that I wanted to make a point by implementing it in a rather high-level language. It's rather that I had to make some decisions. Had I decided to implement everything in hand-tuned, low-level code, I probably wouldn't have got very far.
sadko4u wrote: There is no guarantee that your software won't wait in the audio loop for a long time doing the mentioned things. Especially when it is running at the stage / gig.

Considering the "bad / evil locking": if you inspect the code, you will notice that locking happens mostly during certain parameter changes (including buffer size and sample rate) or rearrangement of the effects units. And in these cases, there will be a slight "discontinuity" in the audio anyway.
The other thing is that the locks will never be taken for long. Re-ordering effects units, for example, swaps two pointers. The lock's not gonna be taken for a long time. I tried "spamming" the application with lots of configuration changes. Yeah, it might occasionally glitch, but it's fairly rare. The most successful way of getting it to glitch will be to "spam" the power amp simulation, since any parameter change will result in the FIR coefficients getting re-calculated. I did this asynchronously in the past, but this introduced a race-condition where, in "non-realtime" / "batch mode", you'd get the first buffer(s) dropped because the FIR coefficients weren't "ready" yet - they were being calculated concurrently to audio blocks already being processed.
The locks I use are RWLocks, so concurrent reads will not block. Only if a value is actually CHANGED will the corresponding data structure be "locked". So just leave the UI alone (and if you're gonna play the guitar live, that's what you're gonna do - you will need both hands playing, you're not gonna mess with your DSP ) and you'll be fine - at least asymptotically, after all the FFT coefficients have been calculated.
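The pattern described above can be sketched with Go's sync.RWMutex. This is an illustration under made-up names, not the project's actual code: the audio path takes only the read lock, so concurrent processing never blocks, while the control path takes the write lock just long enough to swap two entries.

```go
package main

import (
	"fmt"
	"sync"
)

// effect is a stand-in for one effects unit (hypothetical).
type effect func(sample float64) float64

// chain holds the ordered effects units behind an RWMutex: the audio
// thread only ever takes the read lock, so concurrent processing is
// never blocked by other readers; the control thread takes the write
// lock briefly to swap two pointers.
type chain struct {
	mu    sync.RWMutex
	units []effect
}

func (c *chain) process(sample float64) float64 {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, u := range c.units {
		sample = u(sample)
	}
	return sample
}

// swap reorders two effects units; the write lock is held only for
// the duration of a pointer swap.
func (c *chain) swap(i, j int) {
	c.mu.Lock()
	c.units[i], c.units[j] = c.units[j], c.units[i]
	c.mu.Unlock()
}

func main() {
	gain := func(s float64) float64 { return s * 2 }
	offset := func(s float64) float64 { return s + 1 }
	c := &chain{units: []effect{gain, offset}}
	fmt.Println(c.process(1)) // (1*2)+1 = 3
	c.swap(0, 1)
	fmt.Println(c.process(1)) // (1+1)*2 = 4
}
```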
sadko4u wrote: Of course, I'm not trying to blame you. Anyway, you're doing a good and interesting job.

Thank you for your appreciation! You see, I've put almost seven years of development into it (and invested money into it to obtain equipment, measure impulse responses, etc.) and I'm giving it away under a very permissive license.
I hope you don't take my "criticism" personally either. It's not that I feel like "fighting back"; I just wanted to explain why I made certain decisions. Of course, that might mean that the application is unsuitable for your needs. However, I definitely wouldn't say that the application is "unsuitable for professional use" per se. It certainly depends on who that "professional" is. I agree that you might run into problems when using it in a studio. However, I built the application for practice and for (particularly small) bands using it in a live situation - and I specifically mean the musicians themselves, not the audio engineers. It's meant to replace your amp, not your FOH mixing desk. In a live situation, my concerns would rather be about whether a laptop + audio interface is appropriate for live use in general - not because of "audio quality", but because people would step on it, trip over it, spill stuff over it, and it would break before the concert even begins. Non-deterministic timing behaviour of your audio application is certainly not what's gonna break your show.
Regards and rock on! \m/
Last edited by andrepxx on Mon Feb 03, 2020 8:12 pm, edited 2 times in total.
- sadko4u
- Established Member
- Posts: 989
- Joined: Mon Sep 28, 2015 9:03 pm
- Has thanked: 2 times
- Been thanked: 361 times
Re: Go-DSP-Guitar multichannel multi-effects processor
TL;DR
I understand, you've been inspired by Go (wow, it's easy to build applications, the code doesn't bloat, you never need to shuffle bytes as in C, etc.), but I'll just leave this link here:
https://cellperformance.beyond3d.com/ar ... -lies.html
I don't want to argue with you for long. I will just go and write yet another function in assembly that processes 8 FIR filters in the time a native C function takes to process one.
LSP (Linux Studio Plugins) Developer and Maintainer.
Re: Go-DSP-Guitar multichannel multi-effects processor
As you may have noted, this discussion has piqued our interest, so we have taken the time to actually analyze the performance of go-dsp-guitar running in "real-time" (JACK-aware) mode.
In case some of you are interested in the actual real-time performance of go-dsp-guitar, we therefore kindly ask you to take a look at our performance analysis / discussion, where, among other things, we actually measured the impact of garbage-collection and mutex contention on the performance and "real-timeliness" of go-dsp-guitar.
https://github.com/andrepxx/go-dsp-guit ... ormance.md
We will also make use of our findings in order to further optimize the performance of the application. Some of this is already in our repository, but not yet part of an actual "release", since we are still exploring further potential optimizations.
Regards.
-
- Established Member
- Posts: 1516
- Joined: Sun Jan 27, 2019 2:25 pm
- Location: Italy
- Has thanked: 385 times
- Been thanked: 299 times
Re: Go-DSP-Guitar multichannel multi-effects processor
andrepxx wrote: As you may have noted, this discussion has piqued our interest, so we have taken the time to actually analyze the performance of go-dsp-guitar running in "real-time" (JACK-aware) mode.

Hi,
first of all thanks for releasing your software under a free license and for the analysis!
I have not read all of your report, but as someone who doesn't much like C++ (even though it's getting better and better, and by the time C++23 is out it will be a great language), I understand you.
My worry with Go is different: it's still a rather low-level language and might make DSP code a bit harder to write and read than a language suited for it, such as Faust (which compiles to C++). But props for trying to make something musical with it!
The community of believers was of one heart and mind, and no one claimed that any of his possessions was his own, but they had everything in common. [Acts 4:32]
Please donate time (even bug reports) or money to libre software
Jam on openSUSE + GeekosDAW!
-
- Established Member
- Posts: 34
- Joined: Mon Oct 01, 2012 3:04 pm
- Has thanked: 2 times
- Been thanked: 2 times
Re: Go-DSP-Guitar multichannel multi-effects processor
I can't contribute much technically to the conversation, but what I'll do when I next get a break is download this, compile it, and try it out.
It can't hurt to see how it performs/sounds/works!
Re: Go-DSP-Guitar multichannel multi-effects processor
My worry with Go is different: it's a very low-level language and might make DSP code a bit harder to write and read than a language suited for it, such as Faust (which compiles to C++).

Well, I intentionally wanted to use a general-purpose programming language instead of a rather obscure domain-specific language, since it would be easier (not harder) to write code in it, and also more familiar to other developers.
You see, Faust is a research project from a university. I doubt you'd see much use of it outside of the academic world (aside from Guitarix - I'm aware of the fact that it uses Faust extensively).
In fact, Golang is ... well, of course it depends on what you compare it to, but I think it's a rather "high level" language, especially compared to C or C++, which you'd "normally" use for DSP. I regard it as a hybrid between Python and C - I know that sounds weird, but it's actually a lot like that.
It has pointers and structs, "call by value" semantics and compiles straight down to optimized machine code, just like C.
On the other hand, it has automatic memory management and quite an extensive standard library, not quite as large, but also not that far from Python's, definitely with much "higher level" functionality than what C provides. (You can just 'import "net/http"' and you will get an entire web server in your application. You have support for XML, JSON, a templating engine, several crypto algorithms, etc. all in the standard library.)
It's statically typed and has one of the strictest type systems I know. (You cannot even add unsigned integers of different bit widths. You cannot even add an "int" and an "int64", even on a system where "int" is 64 bits wide, etc.) However, it uses type inference basically everywhere, so that the code reads a lot like that of a dynamically typed language. (Normally, you will only have explicit type declarations in struct definitions and in function signatures - and in type conversions of course. )
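A small standalone illustration of that strictness plus inference (my own example, not from the application):

```go
package main

import "fmt"

func main() {
	var a int = 1
	var b int64 = 2
	// a + b would not compile: "mismatched types int and int64",
	// even on platforms where int is 64 bits wide.
	sum := int64(a) + b // an explicit conversion is required
	msg := "sum"        // type inferred as string, no declaration needed
	fmt.Println(msg, sum)
}
```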
bmarkham wrote: I can't contribute much technically to the conversation, but what I'll do when I next get a break is download this, compile it, and try it out.

Yep, thanks a lot!
Keep in mind that we currently do not use feature branches, so "master" is really what's in development.
Specifically, we currently have two versions out that we consider "current". v1.5.0 is the latest stable release; v1.5.1 improves upon it code- and stability-wise, but measurements show that the performance of v1.5.1 compared to v1.5.0 is a mixed bag: some things are faster in v1.5.0 and others in v1.5.1. That's why we tagged v1.5.1 as a "pre-release" version. We will try to improve on that, especially considering that we now know pretty well where our CPU time goes.
Re: Go-DSP-Guitar multichannel multi-effects processor
andrepxx wrote: Even if they occur, garbage-collection "pauses" in Go are sub-millisecond on any decent system.

I did not follow the whole conversation (I am too dumb to follow all the details), but this caught my attention. In the jargon I am used to, sub-millisecond means "up to 1 ms, worst-case scenario".
To put things into perspective, assume we are operating at a 48 kHz sample rate with a buffer size of 32 samples. That means the audio callback of any application has about 0.67 ms (32 / 48000 s) to process all the samples in the buffer. This time is sub-millisecond, and we can construct many more examples with different sample rates and buffer sizes. I guess my point is that, in the domain of computer audio DSP, sub-millisecond times are not really that short. They are enough to cause an xrun as big as an entire buffer. Things will often get worse, I reckon, if multiple channels need to be served.
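As a quick sanity check of that arithmetic, here is a tiny Go helper (illustrative only) that computes the callback's time budget as frames divided by sample rate:

```go
package main

import "fmt"

// bufferPeriodMs returns the time budget, in milliseconds, that an
// audio callback has to fill one buffer: frames / sampleRate.
func bufferPeriodMs(frames, sampleRate int) float64 {
	return 1000 * float64(frames) / float64(sampleRate)
}

func main() {
	fmt.Printf("%.3f ms\n", bufferPeriodMs(32, 48000))  // ~0.667 ms
	fmt.Printf("%.3f ms\n", bufferPeriodMs(64, 96000))  // ~0.667 ms
	fmt.Printf("%.3f ms\n", bufferPeriodMs(256, 48000)) // ~5.333 ms
}
```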
Anyway, maybe it doesn't matter too much at the end. By this I mean: as long as the performance is well quantified, and one knows what to expect from the software, then I don't see issues. Like: maybe I will not be able to run go-dsp-guitar on a raspberry pi at 16 samples per frame and 96 kHz with 8 channels. But maybe I am very happy to use it in my DAW for post-processing my track without needing any low-latency at all... Or maybe it is OK at 48 kHz and more reasonable buffer sizes when used on a single channel.
Which brings me to:
bmarkham wrote: I can't contribute much technically to the conversation, but what I'll do when I next get a break is download this, compile it, and try it out. It can't hurt to see how it performs/sounds/works!

Sounds like a reasonable approach. Let us know how it goes! I would try it too... but my pile of must-try stuff is already too high.
Re: Go-DSP-Guitar multichannel multi-effects processor
andrepxx wrote: You see, Faust is a research project from a university. I doubt you'd see much use of it outside of the academic world (aside from Guitarix - I'm aware of the fact that it uses Faust extensively).

There is some stuff, including some Android audio apps: https://faust.grame.fr/community/made-w ... index.html. Probably not everything made with Faust is on that page, but I reckon that Faust is indeed not used very often.
I do not really want to get into a holy war, as I am hardly proficient with Faust to begin with, but I very often see the equation "language from a research project at a university = unstable, completely useless mess, unusable in the real world of real people who code for real" halfway implied in statements like the one above, or blatantly stated. I have seen it often in regards to Julia too, for example. Which is why I want to address it.
I do not really care, nor am I pretending that your comment means you consider Faust totally useless, but my feeling about that attitude in general, independently of the degree to which it is manifested, is that it looks more like cultural resistance than something technically motivated. Kinda like the instinct of rejection I myself, and many others, have when I see "garbage collected" next to "DSP": maybe in many useful computer-audio cases the two can go together reasonably well. After all, when we need to do mission-critical DSP, we do not use computers at all, but dedicated chips. As far as DSP goes, audio is actually pretty relaxed in terms of requirements. I have worked with DSP systems that were allowed a worst-case latency of 20 microseconds, for example. Computers cannot keep up with that.
Faust is cool, let's love it more! And maybe Go too.