Watch this video! :)

@Southside @SmoothOperatorGR

Thought y’all might find this water video, and the researchers he discusses, interesting.

1 Like

30 mins?!!??! Got a timestamp for the (possibly) interesting stuff?
“Water memory” ← is this Schauberger reheated?

I suppose at 30 mins it’s not as bad as some of the multi-hour rants that others have posted here, but the first couple of mins did not inspire me to spend more time on it.

Sorry if that seems harsh, give me a link to the good bit(s) and I’ll be happy to apologise.

1 Like

He does like to ramble and interject unneeded commentary.

2 Likes

He mentions this guy’s investigation of water memory. Sorry, this one’s even longer. :-\

1 Like

Call me cynical but I have to wonder about a guy who spends a fortune on a fruity monitor which he himself cannot see - then he uses it to show stock video of a comforting fire.

Honestly, I’m not unreasonably grumpy, just very, very cynical.

If this is related to Schauberger then I am interested. However, it is well to note that while Schauberger was proven correct on many aspects, in other areas his work “suffered from a lack of peer review”, shall we say…

Having said all that, if my dream were to come true and I did have proper research facilities and an unlimited budget, then I would tell my team to repeat much of Schauberger’s work from the start with modern-day instrumentation, data logging and CFD techniques.

1 Like

Schauberger and his son were invited to Texas after the war. He claimed industrialists stole everything from him. I’m not surprised.

1 Like

Yes, he died shortly after returning from Texas.
And it would seem that his work died with him.

1 Like

He is a known scammer LOL

46 seconds

“You don’t need a formal conspiracy when interests converge. These people went to the same universities, they belong to the same fraternities. They’re on the same boards of directors, they belong to the same country clubs, they have like interests, and they don’t need to call a meeting; they know what’s good for them and they’re getting it. There used to be 7 oil companies, there are now 3 and will soon be 2. Things that matter in this country have been reduced in choice…” - George Carlin

3 Likes

It’s such a great quote that someone has probably done it before, but I made a new one anyway:

2 Likes

Great interview - if you have any doubts about James O’Keefe’s integrity, this should clear them up.
#michaelmalice #jamesokeefe

1 Like

23 min.

Great breakdown of CBDCs. It’s listenable; no need to watch.

1 Like

Just because…

Percussive maintenance for the win!!

FTX and Sam Bankman-Fried have a dark & scary agenda which involves the CFTC and the Binance exchange. The CFTC has been investigating Binance since last year & it seems FTX has become a pawn in their investigation. Is Sam Bankman-Fried luring CZ of Binance into a regulatory trap!!?? Tune in to find out the scary truth…

10 min.

Judicial Watch - 2yrs ago - don’t forget:

Caught on tape … but nothing ever happened.

Democracy is a fraud.

Interesting review of a Linux phone … suggests a better future is possible … especially the talk, toward the end, of the middle-ground eOS fix for Android apps.

2 Likes

I think he’s wrong at the end where he says it’s too late for open source … people are getting poorer and poorer thanks to money printing and endless scams … it’s only a matter of time before the profitability of closed-source apps goes below the waterline.

So you know all those huge particle accelerators? … yeah, in the future, the same performance may fit into a building.

4 Likes

Here’s a video about longtermists, a term I have seen mentioned a couple of times on this forum.

1 Like

8 min.

Introducing Whisper

We’ve trained and are open-sourcing a neural net called Whisper that approaches human level robustness and accuracy on English speech recognition.

Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.

[Figure: summary of the Whisper model architecture]

The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
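The 30-second chunking step described above can be sketched in a few lines. This is a minimal illustration, not the actual Whisper preprocessing code; it assumes 16 kHz mono input (the rate the released models expect) and zero-pads the final chunk to a full 30 seconds:

```python
import numpy as np

SAMPLE_RATE = 16000                       # assumption: 16 kHz mono input
CHUNK_SECONDS = 30                        # chunk length used by Whisper
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

def chunk_audio(audio: np.ndarray) -> list:
    """Split a mono waveform into 30-second chunks, zero-padding the last one."""
    chunks = []
    for start in range(0, len(audio), CHUNK_SAMPLES):
        chunk = audio[start:start + CHUNK_SAMPLES]
        if len(chunk) < CHUNK_SAMPLES:
            # pad the tail with zeros so every chunk has a fixed length
            chunk = np.pad(chunk, (0, CHUNK_SAMPLES - len(chunk)))
        chunks.append(chunk)
    return chunks
```

Each fixed-length chunk would then be converted to a log-Mel spectrogram before being fed to the encoder.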
[Figure: Whisper architecture details]

Other existing approaches frequently use smaller, more closely paired audio-text training datasets, or use broad but unsupervised audio pretraining. Because Whisper was trained on a large and diverse dataset and was not fine-tuned to any specific one, it does not beat models that specialize in LibriSpeech performance, a famously competitive benchmark in speech recognition. However, when we measure Whisper’s zero-shot performance across many diverse datasets we find it is much more robust and makes 50% fewer errors than those models.

About a third of Whisper’s audio dataset is non-English, and it is alternately given the task of transcribing in the original language or translating to English. We find this approach is particularly effective at learning speech to text translation and outperforms the supervised SOTA on CoVoST2 to English translation zero-shot.

4 Likes