
      What if every thought we have is in another dimension? And each time we think a thought, we create multiple copies. Multiple universes. Infinite universes.

      Every time we think the SAME thought we are creating many copies of that universe. And once there are enough copies in enough universes they begin to resonate off each other. They start to layer, one upon another. Like the reflections in a singer's microphone feeding back into the speaker. The most dominant thoughts win. The most repeatedly occurring thoughts begin to refine themselves into a wave. The beauty of the whole becomes glory in its singular form. It is not singular, but there are enough echoes of it for it to think to itself that it is singular.
      Technical Note: White noise is a representation of all, of the whole. When viewed as a whole it is meaningless. However, if you use a filter to listen to only a narrow band of sound, you will hear a tone. Was the tone already there? Does it have more energy than it did before? No. It simply presents itself as a singular tone.
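      The technical note above can be sketched in a few lines of NumPy (my own illustration, not part of any specific ITC device): narrowband filtering of white noise leaves a tone-like signal, and no energy is added in the process.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                          # sample rate in Hz
x = rng.standard_normal(fs)        # one second of white noise

# Crude bandpass via the FFT: zero out every bin outside 440-460 Hz
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
X[(freqs < 440) | (freqs > 460)] = 0
y = np.fft.irfft(X, len(x))

# The filtered noise now has a single dominant frequency inside the band:
# it sounds like a (noisy) tone, even though no energy was added
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(y)))]
```

Written out to a WAV file, `y` should sound like a hissy but recognizable pitch around 450 Hz, even though it is still "just" filtered noise.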
      Just like microphone feedback - it is not singular in its first iteration, or its second, or third....
      A composite over time is what creates physical reality; more or less, the atoms come together when their master calls. When the dominance of one particular thought, or feeling, or imagination of a large group of people resonates - some of it will appear in its purest composite form. And much of it will be distorted.
      And these thoughts.....they begin to become physical to us now.
      I will introduce some new "steampunk" or acoustic ITC methods in the next post. 
      But first I want to share with you some theories I have about audio ITC.
      To me, reception of spirit / interdimensional signals has at least three components:
      1) Sensitivity to the signal
      2) Resonant modes of the detector
      3) Driving energy
      Sensitivity means that whatever spirits can use to communicate with us (virtual photons, wavefunction selection, or whatever), our devices can pick up these changes/anomalies. The most traditional detectors people have used are microphones and scratchy diodes. Presumably, the microphone picks up small air pressure changes and/or electromagnetic signals affecting its inductive coil. The diode could be picking up radio waves, scalar waves, thermal changes, etc.
      In any case, every ITC detector has some sort of sensitivity. Detectors can be virtually anything, like water or even a hard rock. But as long as we can perceive (humanly or electronically) changes in that detector, it should work. The question, though, is how sensitive to spirit that detector is compared to others. To that, I don't have an answer, but we can certainly select our favorites for experimentation based on perceived improvements or ease of use.
      Resonant modes refers to the available states of the detector or the broader physical ITC system. It can be thought of as the frequency spectrum of physical and non-physical signals emanating from a given device. For example, some devices have two states: they either emit short "pops" or nothing at all. Some have pops of differing duration and amplitude. Some devices emit constant white noise. Others have certain dominant tones, like wind chimes. Still others have a dominant on/off buzzing sound, like some of Andres' creations.
      In each case, there's an "available" set of frequencies that can be produced. Obviously, if we wanted to hear a perfect human voice, the device would need to emit all of the frequencies in a range of approximately 100-8000 Hz. Devices that emit white noise sound great for this challenge, but often suffer from overdoing it in the last factor...
      Driving energy refers to how much our device is physically stimulated. A great example is the work of Anabela Cardoso. She finds that a microphone with noise playing in the background is much better than a microphone in a completely silent room. The added noise is "driving energy." It is both a source of energy for the spirits to manipulate and it ruffles up the air molecules in the room providing a "canvas" for spirit signal implantation. 
      But too much driving energy may not be such a good thing. If I play a super loud buzzing sound (to represent the human glottal voice pattern), we're not going to hear any variations in that buzz, unless we use some pretty serious noise cancellation software. Meanwhile, if I supply a light amount of buzz, the variations may begin to be noticeable to the human ear.
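      The overdrive point can be illustrated with a toy NumPy model (entirely invented numbers, not a real recording chain): a clipped, overdriven buzz retains none of a slow variation riding on it, while a light buzz keeps the variation measurable.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# A 100 Hz square-wave "buzz" standing in for the glottal source
buzz = np.where(np.sin(2 * np.pi * 100 * t) >= 0, 1.0, -1.0)
# A slow, subtle variation riding on the buzz (the part we want to hear)
mod = 1 + 0.2 * np.sin(2 * np.pi * 3 * t)

def recovered_variation(gain):
    # The recording chain clips at full scale, like an overdriven input
    x = np.clip(gain * buzz * mod, -1, 1)
    # Crude envelope spread: how much of the variation survives
    return np.abs(x).std()

quiet = recovered_variation(0.5)   # light buzz: variation survives
loud = recovered_variation(5.0)    # overdriven buzz: clipping erases it
```

With gain 5.0 every sample is driven past full scale, so the envelope is flat and the 3 Hz variation is gone; with gain 0.5 it comes through intact.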
      Here's another "overdrive" situation: radio static. Radio static, when evaluated with a spectrogram, looks as random as can be. You have to apply a lot of software noise removal to extract anomalous signals. I would argue that too much noise makes the filtering process more difficult than it needs to be. One way people balance out the noise is by playing it over a speaker and picking it up with a second microphone.
      Ok, enough rambling about theories. In my next post, hopefully, I will have some interesting results to share.
      Since early 2019, I have been working on software to extract voices from physical noise/signals. My earliest attempts used other people's software, mainly an algorithm called "spectral subtraction" in the ReaFir noise reduction plugin. This converts the noise into the frequency spectrum, where slight imprints of voice can be discovered and emphasized.
      We now enter the year 2022 - Spectral subtraction is still a very valuable tool, but it is only the beginning of a process I've developed for extracting voices. I've created machine-learning-based models to find and emphasize voices. I've also made a program that finds and generates "formants" or peaks in the harmonic buzz of the human voice.
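      For readers curious what spectral subtraction actually does, here is a bare-bones NumPy sketch of the classic magnitude-subtraction variant. This is a generic textbook version, not the author's actual pipeline or the ReaFir implementation, and the 440 Hz "voice imprint" in the demo is invented for illustration.

```python
import numpy as np

def spectral_subtract(x, noise, frame=512, hop=256):
    """Bare-bones magnitude spectral subtraction (one common variant)."""
    win = np.hanning(frame)
    # Estimate the average noise magnitude spectrum from a noise-only clip
    mags = [np.abs(np.fft.rfft(win * noise[i:i + frame]))
            for i in range(0, len(noise) - frame, hop)]
    noise_mag = np.mean(mags, axis=0)

    out = np.zeros(len(x))
    for i in range(0, len(x) - frame, hop):
        X = np.fft.rfft(win * x[i:i + frame])
        # Subtract the noise estimate from the magnitude, floor at zero,
        # and resynthesize using the original phase
        mag = np.maximum(np.abs(X) - noise_mag, 0.0)
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(X)), frame)
    return out

# Demo: a faint 440 Hz "voice imprint" buried in much louder white noise
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
clean = 0.3 * np.sin(2 * np.pi * 440 * t)
noisy = clean + rng.standard_normal(fs)
denoised = spectral_subtract(noisy, rng.standard_normal(fs))
```

After subtraction the buried tone stands out much more clearly against the residual noise, which is exactly the "slight imprints of voice" effect described above.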
      I'm finally releasing my full software, in Python. I use a very similar version of this code in all of my experiments (FPGAs, radio noise, etc.).
      I would've liked to share it as an executable, like I did with Spiricam, but Python executable-makers are notoriously buggy. Another reason I've hesitated in sharing the code sooner is that it used to require some heavy GPU resources. However, thanks to some software developments by Google, my ML models now seem to run pretty well on the CPU in real-time.

      So if you want to try out my code, you'll have to do some command-line steps, and you'll need to install at minimum a free program called Miniconda (or the larger Anaconda) with Python version 3.8, 64-bit. A few GBs of disk storage may be required.
      Here's the link to the code: https://drive.google.com/drive/folders/1fu6hAuE0AbhbQjx0Ts_3Ju0QRJ0awxRM?usp=sharing
      In the directory is a README.txt, which I'll update as we iron out the instructions.
      When I've resolved most of the common issues, I'll make the code into a ZIP file for the Downloads section.
      For now, feel free to ask questions in the comments. As I like to say "The spirits are waiting!"
      Audacity is a perfect tool for ITC. If you use it regularly, as I have been doing for years for ITC, you won't want to miss it anymore. To get Audacity, first download it here. After installation you can make some basic setups, which I will show you now, optimized for ITC. I configured Audacity mainly along with its usage, so everything I present here as best practice was found empirically.
      First go to Edit->Preferences:

      Another menu with lots of tabs will appear. Here we can make all the basic settings we want to use in order to facilitate our work. I will only talk about the tabs that are useful for us; leave the rest at their default settings.

      The first tab contains settings for the input/output devices. Since all my ITC devices use the line input of my soundcard, it is favorable to specify it as the recording device. In my eyes it also makes sense to set mono recording as default; stereo tracks don't make sense in ITC. The rest can stay as it is.

      The next interesting tab is the Recording tab. I found it useful to put a default track name, "ITC Track", in the Custom Track Name field and to enable the naming extension by time and date.

      Quality settings are important. Sometimes we need to do a lot of post-processing. Therefore it is favorable to set the sample rate up to 96 kHz instead of 48 kHz, and the sample format to 32-bit float, to preserve enough reserve in accuracy. For the same reason I selected the highest quality settings for the Sample Rate Converter as shown.

      For copyright reasons Audacity comes without an MP3 encoder. We will do all our exports in MP3 since we want to save disk space and bandwidth. Installing an MP3 encoder is not much of a problem because you can use the famous LAME encoder. It's free and you can get it here. After installation it should be visible in the Libraries tab. If not, use the buttons for the FFmpeg Library to show Audacity the installation paths.

      The Effects are not important for the start, but later we will use the Nyquist extensions, so it makes sense to check the respective box already.

      Now restart Audacity. In the lower left corner you should now see the new project sample rate of 96,000 Hz. For the audio position, choose hours, minutes, seconds and milliseconds, and for selected areas choose "Start and Length of Selection". This is useful when we measure duration and spacing later.

      In the upper left corner you should now see Line input as the default.
      Now you are done. Audacity is now ready for ITC.
      A nice experiment from today came from discussions with Jeff about using cardboard tubes as bandpass resonators for human speech. It was a very simple setup: I took a piece of cardboard tube, put a buzzer into one end (running at a steady frequency), covered this end with one hand, and with the other hand opened and closed the upper end of the tube in a random manner.
      Basically, the results are promising if combined with random excitation of the buzzer.

      Unbenannt 6.mp4    
      There is a variety of metaphysical and spiritualist material on this blog, and, naturally, metaphysical/spiritualist propositions are not scientific.
      The fundamental feature of a scientific proposition is that it is highly dependable. Science doesn't pretend to hold the ultimate truth, only the truest we can possibly get at any given moment. But for all practical purposes, the closest we can get to the truth is essentially as good as the actual, ultimate truth itself.
      But the whole of reality, with its objects, processes, actors and rules is, unfortunately, far larger than what we scientifically know about it today. And, in my informed but humble opinion, it is going to be like that for a long time.
      What are we to do with all of the things that we wish to know, but can't know scientifically? While many people are quite happy with the limits of science (which, to be fair, expand all the time), and don't care to know anything except scientifically, several others, like me—and presumably any reader of this blog—would like to know something about the things outside the current scope of science. But how does one go about that in a rational way?
      That is a trick question, because a complete, totally satisfying answer would have to say that the (only) rational way to know something about reality is through science. And that is, in itself, correct. But we are trying to work around the problem of knowing, nevertheless, what we cannot, scientifically, know.
      A practical solution is to reconsider and relax the conditions for knowledge. Traditionally, knowledge is said to be Justified True Belief. That seems like a simple definition. For example, if I believe something about nature, and science provides me with a postulate that matches that belief, then the belief is true and I have the scientific justification for believing it; hence, I can loudly assert that I don't just believe so, I actually know so.
      But as it turns out, the conditions under which it is reasonable to promote a belief to the rank of knowledge are far from simple. There are several considerations, different approaches (such as reliabilism vs. evidentialism), and even different theories of what truth is to begin with.
      Binary classifying of propositions as either (mere) beliefs or knowledge is probably an impediment in itself. Can there be degrees of truth? Can a proposition have a probability of being true, rather than being just true or false? We humans have degrees of certainty, and that is the really important cognitive feature of knowledge (we are certain of the truth of the things we know, and only uncertain about the things we merely believe).
      That propositions can be probably true (rather than certainly true) is debatable and largely debated.
      I am a huge fan of this approach to the problem of knowledge and the truth, often known as Bayesian Epistemology.
      This approach can be summarized simply: rather than promoting, by means of whatever epistemic process I choose, a (mere) belief, which does not certainly correspond to a truth, to the category of knowledge, which does, we can consider any belief as having only a probability of corresponding to the truth. In this logic, there is never a binary question of whether something is true or false; rather, the question is always how probably true it is.
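      The quantitative machinery behind this view is just Bayes' rule. Here is a worked toy update, with numbers invented purely for illustration:

```python
# One Bayesian update of a proposition's probability of truth
prior = 0.40            # initial probability that the proposition is true
p_e_if_true = 0.70      # chance of observing this evidence if it is true
p_e_if_false = 0.20     # chance of observing it anyway if it is false

# Bayes' rule: P(true | evidence)
posterior = (p_e_if_true * prior) / (
    p_e_if_true * prior + p_e_if_false * (1 - prior))
# The evidence raises the probability of truth from 0.40 to 0.70
```

Each new piece of evidence repeats this step, with the old posterior becoming the new prior; nothing ever jumps to a flat "true" or "false".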
      Bayesian epistemology, that is, the idea that beliefs only have a probability of corresponding to the truth, is especially practical, in my opinion, for it allows me to handle the certainty associated with a belief as a separate (related, but still separate) epistemic process. That is, a scientific proposition has a very high probability of truth [1], and I can (and reasonably will) attribute a high level of certainty to it in consequence. But I can also come across a non-scientific proposition that is given, by whoever proposed it, a probability of truth of, say, 60%, and attribute to it a level of certainty of, say, 30%. Or maybe 100%. It's up to me.
      Likewise, I can make a proposition and give it a probability of truth of, say, 40% (based on whatever criteria), while being myself 100% certain about it (for whatever reason).
      The way I see it, Bayesian epistemology allows us to consider the full spectrum of degrees of truth for any set of propositions, so that we can produce, share, discuss, even teach (if appropriately done) a Body of Bayesian Knowledge. That is, a knowledge not of certain truths (like the scientific knowledge) but of probable truths.
      Bayesian knowledge is especially suited to cover the part of reality that is currently outside the scope of science. The reason is that the traditional, binary kind of knowledge is, by definition, true; thus, one is bound to attribute to it full, or at least reasonably high, certainty.
      For example, if you read in a respectable, scientific source that a proton is made of three quarks, then you are now certain that this is the truth, for that is a piece of scientific knowledge.
      Bayesian knowledge, on the other hand, makes no promise of telling the truth. It only says that something is possibly the truth, with whatever measure of probability, based on this and that. You are then not automatically bound to attribute any certainty to it. Instead, you are pretty much forced to judge for yourself. You might, for example, decide to dig up why it is said to be so, where it comes from, what the evidence is, etc...
      That is, a body of Bayesian knowledge does not force itself into your own belief system. Instead, it simply presents itself, and leaves you to integrate and adopt it as you see fit.
      Sheer speculation, for example, would be classified as Bayesian knowledge with a very low truth probability (whether admittedly or not). And ideally, in the process of integrating it into a given belief system, critical thinking and correct rational evaluation will discover that low truth probability and attach the appropriate certainty.
      Suppose we consider that a truth probability of, say, 95% and above corresponds to scientific propositions; 20% or below to magical thinking, dogmatic doctrines, etc.; and between 20% and 50% to conjectures with little evidential support, or theories that clash with highly certain accepted knowledge. What would be a reasonable truth probability threshold, and how would a proposition deserve it?
      This is not a trivial matter, but I would think that a propositional system with the following properties can reasonably be given a truth probability in the range of 60% to 90%.
      (a) Integrates well into accepted knowledge, that is, it doesn't contradict science.
      (b) Supported by evidence, even if not scientific grade evidence, such as Clairvoyance, Out of Body Experiences (OBEs), or Automatic Writing.
      (c) Sufficiently well formulated, even if not in a scientific language, so that it is comprehensive and can be adequately taught and learned.
      (d) Can be falsified, that is, proven wrong.
      (e) Can be experimentally tested, even if the experiments are not reproducible by just any experimenter (that is, the experimenter is so entangled with the experiment that they become one of the critical variables), or the measurements cannot be performed with a human-made machine (for example, the measurement can only be done by a human mind of certain characteristics, such as a Medium or a Psychic).
      (f) When the object of study becomes a part of science, the propositions are more likely to be refined than to be refuted.
      The inescapable hard work of scientific rigor puts all the burden on the scientist. Once they put out a theory (and it is accepted by the scientific community), it comes with a sort of strong guarantee. Not that it is infallible, but that it is highly probably the truth. Metaphysical and spiritual discourse cannot have such a strong guarantee, but it can have some; depending on the case, maybe weak, maybe moderate.
      In this blog you will find, therefore, Metaphysical and Spiritual Bayesian Knowledge which I think ranges from 60% to 90% probability of truth, depending on the specific subject. And you are specifically expected to integrate and adopt into your own belief system whatever part of this you deem reasonable, using your own judgment and criteria.
      I will try my best to provide in each post sufficient justification material so you can, in practice, judge for yourself.
      The ideas and propositions in this blog come from a variety of sources, but I will NOT, in general, make any formal reference list, mainly because in most cases the ideas cannot be simply traced back to this or that source.
      There are, however, some primary sources to which almost everything here can be traced back to.
      The most relevant one is the many books, conferences and general teachings of a very contemporary [2] Spiritist kind of Church from Argentina, known as the Basilio Scientific School Association (BSSA). I am a member of this church; thus, a significant part of what is written in this blog comes from there. There is an unofficial but very well written, comprehensive coverage of the core teachings of the BSSA in the book The Spiritual Theory.
      The second most relevant source is the so-called Spiritist Doctrine as codified by Allan Kardec and his many books.
      Another quite influential source is the many, many books of Theosophy. I should clarify, however, that I am not really a huge fan of Theosophy, especially the part developed by its founder, Ms. Blavatsky. But this doctrine is nevertheless highly influential for me; it's just that you won't find many direct correspondences or quotes in this blog [3].
      I am, however, a huge fan of Rudolf Steiner, especially his metaphysical work. He parted from Theosophy and founded Anthroposophy.
      Other highly influential—and quite varied—sources include, but are not limited to, The Seth Material, A Course In Miracles, and many books on the afterlife (there is a very good reading list recommendation in this Keith Person video).
      [1] One which I can even specifically know, in many cases, since often that probability is directly integrated into the scientific theory.
      [2] It was founded in 1917.
      [3] I much prefer the works of C. W. Leadbeater and of Annie Besant.
      I've tried a variety of FPGA "designs." The one I'm sharing now produces musical tones similar to the white keys on a piano for six octaves.
      The tones are simple square waves, which probably sound most like 8-bit video games from the 90's. The spacing of the tones uses something called the just intonation temperament. Instead of powers of 2^(1/12), just intonation uses simple fractions that are actually more in tune with each other. The downside of just intonation is that you can't easily change scales.
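      The difference between just intonation and 12-tone equal temperament is easy to compute. A generic illustration using the textbook major-scale ratios (the exact ratios used on the FPGA may differ):

```python
import math

# Just-intonation major scale vs. 12-tone equal temperament
# (generic textbook ratios, not taken from the FPGA design itself)
just = [1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2]                  # C D E F G A B C
equal = [2 ** (n / 12) for n in (0, 2, 4, 5, 7, 9, 11, 12)]

for j, e in zip(just, equal):
    cents = 1200 * math.log2(e / j)   # deviation in cents (1200 per octave)
    print(f"just {j:.4f}  equal {e:.4f}  diff {cents:+.1f} cents")
```

The major third, for example, differs by about 14 cents between the two systems, which is why just intervals sound noticeably purer when played together.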
      Each tone is activated when a noise source running at 50 million bits per second happens to emit 26 1's in a row. It sounds pretty unlikely, but given the number of bits per second and the number of tones (N = 42), it happens often enough to produce music. Here's a sample:
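      The back-of-the-envelope trigger rate works out as follows (a rough estimate that ignores the small correction for overlapping runs):

```python
# Rough expected trigger rate: a run of 26 ones has probability 2^-26
# at any given bit position
bits_per_sec = 50_000_000
run_len = 26
tones = 42

hits_per_tone = bits_per_sec * 0.5 ** run_len   # roughly 0.75 triggers/sec per tone
hits_total = hits_per_tone * tones              # roughly 31 triggers/sec overall
```

So each tone fires a little less than once per second on average, and with 42 independent sources the whole instrument produces a few dozen notes per second, which matches "enough to produce music".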

      sample_16_26_36.mp3 Here's what the spectrogram of this sample looks like. Square waves produce a fundamental sine wave and odd harmonics (3x, 5x, 7x, etc.)

      The noise source is perhaps the most critical part. In this particular project I'm using XOR'ed ring oscillators, which are described in the scientific literature. The big challenge is that no two ring oscillators are exactly alike. Despite having the same "length" of 101 delay gates, the idiosyncrasies of chip transistors mean the actual delay times will vary between ROs. Their susceptibility to noise will also differ. I can only hope that 16 ROs per tone is enough randomness to average out the variations so that each tone triggers about the same number of times. If you look carefully at the spectrogram, you will see that certain tones are triggered a little more than others.
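      The hope that XOR'ing many mismatched oscillators evens things out has a textbook basis (the piling-up lemma). A quick NumPy simulation of biased bit streams, standing in for imperfect ROs, illustrates the effect:

```python
import numpy as np

rng = np.random.default_rng(42)

def xor_bias(p_one, m, n=200_000):
    """Bias (|P(1) - 0.5|) of the XOR of m independent bit streams,
    each emitting a 1 with probability p_one."""
    bits = rng.random((m, n)) < p_one
    xored = np.bitwise_xor.reduce(bits, axis=0)
    return abs(xored.mean() - 0.5)

# A single stream with p = 0.6 is visibly biased; XORing 16 such
# streams drives the bias down toward zero
bias_1 = xor_bias(0.6, 1)
bias_16 = xor_bias(0.6, 16)
```

With 16 streams the residual bias is down in the sampling noise, which is the statistical reason for XOR'ing 16 ROs per tone; real ring oscillators are correlated in ways this toy model ignores, so it is only suggestive.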
      Here are some voice samples after the signal is translated via machine learning:
      "confident signal"

      "it mirrors wonderful"

      "given the opposite" 

      "now that we're talking"

      Finally, here's the Verilog code, for some future brave FPGA developer:
//
// Author: Michael S. Lee
// Noise source-activate musical square wave tones
// Started: 12/2021
//
module XOR_loop_gate #(parameter N = 149) // Length of each RO
   (input wire clki, output wire gate0);

   wire [N:0] loop[M-1:0] /* synthesis keep */;
   reg [0:0] gate;
   reg [M-1:0] lpb;
   reg [L-1:0] buffer = 0;
   wire hit;
   reg [0:0] check;
   integer ctr;

   parameter period = 65536 * 48; // duration of tone (in 50 Mhz samples)
   parameter M = 16;              // # of ring oscillators
   parameter L = 26;              // length of seq. of 1s to turn on gate

   genvar i, k;
   integer j;

   generate
      for (i = 0; i < M; i = i + 1) begin: loopnum
         assign loop[i][N] = ~(loop[i][0]);
         for (k = 0; k < N; k = k + 1) begin: loops
            assign loop[i][k] = loop[i][k+1];
         end
      end
   endgenerate

   assign hit = ^(lpb);
   assign gate0 = gate;

   always @(posedge clki) begin
      for (j = 0; j < M; j = j + 1) begin
         lpb[j] <= loop[j][0];
      end
      buffer = (buffer << 1) + hit;
      check = &(buffer);
      if (check == 1) begin
         ctr <= period;
         gate <= 1;
      end else if (ctr > 0) begin
         ctr <= ctr - 1;
      end else begin
         gate <= 0;
      end
   end
endmodule

module Direct_Voice(clk, out);
   input clk;
   output wire out;

   parameter N = 42;   // # of musical tones
   parameter bits = 7;

   reg [bits+1:0] PWM = 0;
   genvar k;
   integer i;
   wire [N-1:0] outw /* synthesis keep */;
   reg [N-1:0] outr;
   reg [18:0] outp[N];
   integer sum, suma[24];
   reg [22:0] clk2 = 0, clk3 = 0, clk5 = 0, clk7 = 0, clk9 = 0, clk15 = 0;

   generate
      for (k = 0; k < N; k = k + 1) begin: prep
         XOR_loop_gate #(101) test(clk, outw[k]);
      end
   endgenerate

   assign out = PWM[bits+1];

   always @(posedge clk) begin
      // Clocks for different musical tones
      clk2 = clk2 + 1;
      clk3 = clk3 + 3;
      clk5 = clk5 + 5;
      clk7 = clk7 + 7;
      clk9 = clk9 + 9;
      clk15 = clk15 + 15;

      // Convert wire gates to registers
      for (i = 0; i < N; i = i + 1) begin
         outr[i] <= outw[i];
      end

      suma[0]  = (outr[0]  & clk2[19]) + (outr[1]  & clk3[19]);
      suma[1]  = (outr[2]  & clk5[20]) + (outr[3]  & clk7[20]);
      suma[2]  = (outr[4]  & clk9[21]) + (outr[5]  & clk15[21]);
      suma[3]  = (outr[6]  & clk2[18]) + (outr[7]  & clk3[18]);
      suma[4]  = (outr[8]  & clk5[19]) + (outr[9]  & clk7[19]);
      suma[5]  = (outr[10] & clk9[20]) + (outr[11] & clk15[20]);
      suma[6]  = (outr[12] & clk2[17]) + (outr[13] & clk3[17]);
      suma[7]  = (outr[14] & clk5[18]) + (outr[15] & clk7[18]);
      suma[8]  = (outr[16] & clk9[19]) + (outr[17] & clk15[19]);
      suma[9]  = (outr[18] & clk2[16]) + (outr[19] & clk3[16]);
      suma[10] = (outr[20] & clk5[17]) + (outr[21] & clk7[17]);
      suma[11] = (outr[22] & clk9[18]) + (outr[23] & clk15[18]);
      suma[12] = (outr[24] & clk2[15]) + (outr[25] & clk3[15]);
      suma[13] = (outr[26] & clk5[16]) + (outr[27] & clk7[16]);
      suma[14] = (outr[28] & clk9[17]) + (outr[29] & clk15[17]);
      suma[15] = (outr[30] & clk2[14]) + (outr[31] & clk3[14]);
      suma[16] = (outr[32] & clk5[15]) + (outr[33] & clk7[15]);
      suma[17] = (outr[34] & clk9[16]) + (outr[35] & clk15[16]);
      suma[18] = (outr[36] & clk2[13]) + (outr[37] & clk3[13]);
      suma[19] = (outr[38] & clk5[14]) + (outr[39] & clk7[14]);
      suma[20] = (outr[40] & clk9[15]) + (outr[41] & clk15[15]);
      // suma[21] = (outr[36] & clk2[20]) + (outr[37] & clk3[20]);
      // suma[22] = (outr[38] & clk5[21]) + (outr[39] & clk7[21]);
      // suma[23] = (outr[40] & clk9[22]) + (outr[41] & clk15[22]);

      sum = 0;
      for (i = 0; i < 21; i = i + 1) begin
         sum = sum + suma[i];
      end
      PWM = PWM[bits:0] + sum;
   end
endmodule

      16_26_36_let_this_have_her_influence.mp3 16_26_36_with_portal_music.mp3
