Everything posted by Michael Lee

  1. First - feel free to comment here. Perhaps a moderator might consider starting our comments as a thread on the forum? Thanks for your thoughts. The more we hash this out, the more we can test and try out different hypotheses. Spirit infusion into living matter reflects its curiosity to experience its state. However, as you said, there's also a natural duality between the physical and spiritual. Maybe physical things are just spiritual things glued or knotted together in semi-permanent forms? Susceptibility to spirit influence is something I didn't get into. Crystals, for example, may well be good "antennas" to the spirit world, but their resonant frequencies may be in the MHz and GHz, so they need specialized electronics to detect their signal. I've actually looked at this a bit with quartz oscillators.
  2. As we observe paranormal activity in our ITC devices and software, the grand question is: how is this happening?

Zero-point energy

In quantum mechanics, the vacuum is not actually empty. It is filled with particle-antiparticle pairs that perpetually go in and out of existence. The lifetime, Δt, of these pairs is governed by the Heisenberg uncertainty principle: ΔE·Δt ≥ ħ/2. Despite my careless description of a physical concept, it should be noted that no one really knows the density of virtual particle pairs in the vacuum. If the density were infinite, the universe would collapse under the weight of gravity. If it were finite and not too small, could we someday tap into it to get free energy? In any case, when we use a device to tap into this field, we are not going to get a whole lot out, unless the device is receptive to a large bandwidth of energies (from radio to light). What would a vacuum photon look like? Likely a very, very short pulse of energy, maybe a femtosecond or picosecond. I like to call these hypothetical pulses "spiritons," but the reality is that observed random pulses of energy could be just that, random, and not caused by the communication intentions of a spirit / interdimensional entity.

Quantum selection

A hot topic in the quantum science community recently is the idea of quantum selection (also Google "quantum eraser") - that is, the effect of the researcher on the outcome of quantum-level experiments. It's driving some researchers mad, but in our case, we ask a similar but crazier question: "What if spirits can select / collapse quantum states?" If so, the best devices would be ones where many quantum states are prepared and metastable (barely stable) until a spirit decides which way they will go. Presumably, we want to continuously and quickly prepare non-equilibrium, metastable states for spirits to collapse at a desired rate of information (i.e., bits per second). Imagine a system that prepares a metastable state 10 million times a second: a spirit could either leave the state alone, or select "up" or "down." This would allow information transfer of up to 10 megabits per second. Not bad? Of course, we would need to make sure that nothing else collapses our states - thermal or electrical energy, or even our own thoughts (?!?). No problem: we could shield the system from all known fields (e.g., magnetic) and put it in a near-zero-Kelvin, liquid-helium-cooled freezer. In reality, until our research becomes "mainstream," liquid-helium-cooled experiments are not likely. Indeed, I had a vision once of seeing an advanced video device that seemed to have its own internal sub-freezing (< 0 Celsius) cooling system. It had the brand name Moen, I imagine in reverence to the famous afterlife pioneer, Bruce Moen. However, for now, we are limited to room temperature or at best liquid-nitrogen-cooled (77 K) systems. With the remaining thermal energy, how can we detect the presumably weak signal from spirit? One idea is microscopic isolation - also out of the range of our non-mainstream research labs. Researchers think that nitrogen atom "vacancies" in diamond, if sufficiently spaced apart, could act as isolated qubits. These qubits, if put into a metastable state, could be allowed to collapse into an "up" or "down" state and then read with a sensitive detector. Perhaps the spirits can manipulate these miniature "abacuses" for us to read their messages? One "hot" area of research is the use of lasers to obtain quantum noise.
The idea is that beam splitters have a 50/50 chance of sending a photon one direction or another. With a suitable setup, one can count the photons going in each direction as a function of time. The noise present in many electronic devices, for now, offers our best chance at sampling quantum effects. Yes, the noise will be dominated by thermal motions, but if enough spirit signal can be collected, we may be able to infer the rest using tools like machine learning. One idea is to have many noise sources in an array. The concept is that if each noise source has independent, non-correlated fluctuations, then when we sum up the signals, the spirit (quantum) signal might become more pronounced. The theory says that the signal-to-noise ratio could increase by as much as the square root of N, where N is the number of detectors. The reality is that this improvement in arrays hasn't been realized in my experiments. Perhaps the noise in each device isn't uncorrelated like we hope? Or maybe the spirit signal is not equally imprinted on all of the devices at once?

Conclusion

The take-home message is that, given our current affordable device options, spirit influence is a tiny portion of the overall noise (entropy). Incidentally, a spirit once suggested to me in an astral projection that the proportion is 1 in 500! Any method we can dream up to improve the ratio of spirit-to-noise will lead to improved ITC.
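To make the square-root-of-N argument concrete, here is a minimal numpy sketch. It is purely synthetic: a weak 1 kHz tone stands in for the hoped-for common signal, and each detector gets its own independent Gaussian noise - none of these numbers come from a real ITC array.

```python
# Minimal synthetic sketch of the sqrt(N) idea: a weak tone common to all
# detectors plus independent Gaussian noise per detector. Levels are illustrative.
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000                                   # sample rate (Hz)
t = np.arange(fs) / fs                        # 1 second of samples
signal = 0.01 * np.sin(2 * np.pi * 1000 * t)  # weak "signal" seen by every detector

def summed_snr_db(n_detectors):
    # The common signal adds coherently (amplitude ~ N); independent noise
    # adds incoherently (amplitude ~ sqrt(N)), so power SNR grows roughly ~ N.
    total = np.zeros_like(t)
    for _ in range(n_detectors):
        total += signal + rng.normal(0.0, 1.0, size=t.size)
    sig_power = np.mean((n_detectors * signal) ** 2)
    noise_power = np.mean((total - n_detectors * signal) ** 2)
    return 10 * np.log10(sig_power / noise_power)

for n in (1, 4, 16, 64):
    print(f"N = {n:3d}   summed SNR ~ {summed_snr_db(n):6.1f} dB")
```

In this idealized case, each doubling of N buys roughly 3 dB; the point of the post is that my real LED/transistor arrays don't seem to follow this curve.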
  3. Pre-built Electronics

The first noise sources I worked with were generated by pre-made electronics: the USB input audio interface turned up to max gain (+46 dB) and a software-defined radio tuned to no radio station/source. Both of these sources produce nearly white noise. White noise means that all of the frequencies have the same magnitude. Both of these sources are probably suitable for noise-gate applications like the phonetic keyboard. However, in order to derive voice directly from noise, I have often hypothesized that we would need something more sophisticated.

Home-made Electronics

Years ago, I avoided getting into ITC precisely because I didn't feel I had the chops to make the electrical ITC circuits that people prescribed. Only about two years ago did I realize that ITC is as much a software problem as a hardware one, and pre-built noise sources might be sufficient. However, I wanted to go further with hardware noise (entropy) sources. I'm very cautious when it comes to electronics. I'm not interested in working with high-power systems because I don't want to start any fires or shock myself. I don't own (yet) a 30 V DC adjustable power supply (which, BTW, often has a lot of annoying periodic interference noises). So my two main sources of power, to date, are a 3 x AA battery (4.5 V) power supply and the 48 V phantom power from my USB audio interfaces and mixers. Phantom power is very low current and thus fairly safe, but it can't power too much circuitry.

Reverse-biased White LED

One of my earliest hand-made noise sources, which I discovered by accident but is commonly known, is the reverse-biased light-emitting diode (LED). If you apply >30 V in reverse to a small white LED, you will often, but not always, get pink noise that can be made quite audible with the 200x (46 dB) gain of a USB audio interface. Simply put a 100 kilo-ohm resistor in series with +48 V (pin 2 of an XLR connected to the microphone interface). That resistor hooks to the cathode of the LED. The anode is then connected to ground (XLR pin 1). Every single LED has different noise characteristics, but if you try, say, 10 LEDs, you should find one or two that produce a distinct grumbly pink noise. I've since bought hundreds of white LEDs, and find about 30% make good noise. Some LEDs are louder than others. The thinner LEDs tend to work better, but YMMV. The phenomenon yielding noise in this setup is known as the avalanche breakdown effect. In layman's terms (and my primitive understanding), the high voltage running in the opposite direction of normal operation for the LED causes the current to spill over in a non-deterministic fashion. If you look closely on an oscilloscope, you can sometimes see a random sawtooth pattern. The energy builds up and then randomly collapses, producing flicker / pink noise. Is it possible spirits can control when the, otherwise random, spill points happen? Nonetheless, the lower-triangular power spectrum of pink noise somewhat resembles the spectrum of human speech. Human speech starts at around 75 Hz, builds up to 300-500 Hz, and then decays to 5-6 kHz with a slight bump near the high end for sibilants like "s" and "t".

Reverse-biased NPN Transistor

A similar avalanche effect, and a similar setup, can be achieved with a transistor. My favorite device, which I've also bought hundreds of, is the 2N2222(A) NPN transistor. At around 10 V, with a reverse bias between the base and emitter leads, white noise can result.
This noise I originally simply amplified, again, with the (up to) 200x gain of my USB interface.

Arrays of Reverse-biased PN Junctions

The results for the avalanched white LED for direct voice often sounded a little better than what I could get from the avalanched transistor; however, I still wanted a better signal-to-noise ratio (SNR). A common method for improving SNR is to use more identical sensors and sum up their signals. The concept is that if the signal is the same in each sensor, it will grow in amplitude linearly with the number of sensors, N. However, if the noise in each sensor is uncoupled, then it should only accumulate as the square root of N. In total, the SNR should grow as the square root of N.

(Photo: Illumination for fun. In reverse-biased mode, LEDs don't light up.)

Maybe in sensors for physical phenomena this is true, but for picking up spirit signals, it never quite works as well. To be sure, an array of 30-50 avalanched circuit elements produces a better signal than a single element, but 100 or 400 elements in parallel doesn't seem to make much difference. There are a few reasons why this could be the case:

1) The noise in each element is not completely uncorrelated with the others. If the noise were, say, ground hum, this would make sense. However, the noise often appears to be random, not interference from other electronic systems in the environment.
2) Spirits can't equally affect all sensors at once. Interestingly, they often use the term "field" to describe my arrays of LEDs and transistors.
3) The phantom power gets drained too much from powering multiple elements. This effect can be ameliorated by using more than one phantom power source. For example, I have a cheap 8 x XLR mixer that I've tried. Another trick is to up the resistor from 100K to 1M. The downside is that the loudness per device is reduced.
4) After averaging out all of the additive noise, there are still other degradations that can't simply be averaged away. This would be true, for example, if spirits were actually modulating the noise to produce voice. In this case, inference algorithms would be needed to "clean up" this effect.

(Photo: A white LED array powered by 48 V phantom power from a USB audio interface. Notice the lone green LED - other color LEDs can sometimes also generate pink noise.)

Massive Arrays

Some spirits think that if we could get a few thousand LEDs in parallel, we would reach better clarity. I did buy something called an avalanche photodiode (APD). This device contains 4000 or so diodes, and its job is to detect single photons. The white noise it produced sounded very smooth, but the ML-translated spirit voices weren't necessarily that much better. So for now, do large arrays give better vocal clarity? The jury is out.
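If you want to check whether a given LED leans white or pink, here is a rough Python sketch (not my exact workflow). The filename is just a placeholder for a clip recorded through the USB interface, and the band limits are arbitrary:

```python
# Rough sketch: estimate the power spectral density of a recorded noise source
# to see whether it looks white (flat) or pink (falling roughly 1/f in power).
# "led_noise.wav" is a placeholder for a clip recorded from the USB interface.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, x = wavfile.read("led_noise.wav")
x = x.astype(np.float64)
if x.ndim > 1:                     # keep one channel if the interface records stereo
    x = x[:, 0]

freqs, psd = welch(x, fs=fs, nperseg=4096)
band = (freqs > 100) & (freqs < 8000)          # speech-ish band
slope = np.polyfit(np.log10(freqs[band]), 10 * np.log10(psd[band]), 1)[0]
print(f"PSD slope: {slope:.1f} dB/decade  (near 0 ~ white, near -10 ~ pink)")
```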
  4. Voice compression algorithms utilize the common patterns of human speech to detect (at one end) and synthesize (at the other end) voice communication. Among the common structures of speech is the glottal pulse, which is the buzzing "ah" sound that forms the basis for all vowels and certain voiced consonants (like z, v, and r). White noise is the other base sound, for forming phonemes like "s", "sh", and "t". Shaping these two foundational sounds are formants, which are the various resonances of the human vocal cavity. Formants can be modelled more or less as a small sum of narrow bandpass filters, either Gaussian or Lorentzian (1/[1+x^2]) functions. Although I don't use it, formants can also be modelled as a 10th-15th order all-pole filter. As expected, the poles of this filter look like Gaussians. If we are trying to obtain the vestiges of speech from weak interdimensional signals, the same concepts used in voice compression can be used to deduce subtle voice patterns. The challenge is, of course, making the correct deductions of the various speech components given the fact that the noise dominates over the weak spirit signal. I hypothesize that spirit signals in our devices are often extremely low-bit information, not unlike voice compression, with the caveat that our compression algorithms are able to selectively encode the most salient aspects of the transmitter's voice patterns. Meanwhile, the signal of a spirit's voice may be the 1-bit on-off "ditter" or random back-and-forth shot noise of a semiconducting element. I'm guessing that high-quality human voice requires about 4 to 6 formants. By trial and error, I settled on a formant function (Gaussian) with a width (standard deviation) of 120 Hz. For many input sources, higher-frequency formants tend to be missing or clouded by the artifacts of 1-bit quantization.

Pitch detection: We can assume that the fundamental frequency of the glottal pulse ranges anywhere from 75 Hz (deep male) to 500 Hz (child's voice).

Voiced (vowel) vs. unvoiced (consonant) sound detection: One method I use is to count the number of zero crossings of the clip. If the number is above a threshold, it is assumed to be unvoiced (see the sketch at the end of this post).

Our glottal pulse has equal-amplitude harmonics, since the formants can govern the amplitudes of the individual harmonics. The shape of the glottal pulse and the resultant harmonics were obtained more or less by trial and error. The glottal pulse sounds like a digital "ah" sound. The realism of the synthesized speech can be improved by convolving the signal with a short (48 ms) random all-pass filter, which acts much like a reverberation function. Performance on clear speech demonstrates that our algorithm works correctly, in principle.

Let's listen to some audio samples using my voice. First, clean voice, spoken three different ways: normal, whisper, and raspy: voice_variations_clean.mp3 Second, processed with the formant detector algorithm set at normal voicing: voice_variations_fd80.mp3 Third, processed with the FD at enhanced voicing: voice_variations_fd145.mp3 As you can hear, enhanced voicing may be able to make raspy ITC audio more "life-like."
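Here is a small sketch of the zero-crossing voiced/unvoiced test mentioned above. The frame length and threshold are illustrative guesses, not the values from my actual formant detector:

```python
# Sketch of a zero-crossing-rate voiced/unvoiced test.
# Frame length and threshold are illustrative, not tuned values.
import numpy as np

def is_unvoiced(frame, zcr_threshold=0.15):
    """Classify a short frame as unvoiced if its zero-crossing rate is high.

    zcr_threshold is the fraction of adjacent sample pairs that change sign;
    noisy fricatives ("s", "sh") cross zero far more often than vowels.
    """
    crossings = np.count_nonzero(np.diff(np.sign(frame)) != 0)
    return crossings / len(frame) > zcr_threshold

# Example: a 440 Hz tone (voiced-like) vs. white noise (unvoiced-like)
fs = 16_000
t = np.arange(int(0.032 * fs)) / fs                    # 32 ms frame
print(is_unvoiced(np.sin(2 * np.pi * 440 * t)))        # False (low ZCR)
print(is_unvoiced(np.random.randn(t.size)))            # True  (high ZCR)
```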
  5. Fernando - Thanks for the well-thought-out answer and video. With a little video editing, you could become the next great YouTube star. I like your theory about how being too spiritual can lead to mind-control abuses. My theory is that the Earth system is run by someone who thinks the best way to educate souls is to keep them in the dark while they're here. It's not something every Soul agrees with, but those that agree to incarnate here were willing to make that sacrifice. I agree, though, that when and if we turn on the light switch, it's going to be "upsetting" to just about everyone. Perhaps we will do our best not to advertise too much, but I imagine the phenomenon will quickly become bigger than us, so the only thing we're choosing now is to be curious and try to open Pandora's box. One way or another, it's going to happen, not just because people like us are mixing our spiritual interests with technology, but because the deeper science digs, the more it sees Reality, or more aptly, the illusion of this reality.
  6. The particular problem they were discussing in my vision was "wreaking havoc" in the timeline. The idea was that spirits were sharing predictions with us (in the physical) from their "future prediction" technology, like lottery numbers, which, in turn, invalidates this prediction technology or leads to instabilities. I agree that there are plenty of legal issues, too. We need an ITC lawyer.
  7. I guess my point was that the "water lens," whatever that entailed, was a relatively harmless technique compared to some more sophisticated setups. Of course, if I bet money on every "vision" I've had, I'd be in the poor house.
  8. It was a lucid dream, possibly a spirit world vision. I was in the future reflecting on our ITC discoveries that had transformed the world to some extent. I heard that the water lens method was popular, but audio ITC was "blocked" because it was causing too much confusion in the timelines. Something about too many people winning the lottery.
  9. Up until the last few years, the main techniques for removing noise from signals were based on spectral subtraction. Machine learning (ML) has now become a powerful alternative. It takes advantage of the fact that we know roughly what the denoised signal should sound like. I have a paper on this topic here (I'll add a paper download link). The general principle is that we train the ML to convert (noise + speech) -> speech. I use a database of 140,000 seconds of "books on tape." I add random noise to 1.024-second clips of speech and then ask the ML to reverse or remove the noise. What I've found is that I can remove white noise that is up to 3x louder than the underlying speech signal. Unfortunately, based on listening carefully to hardware-produced white noise, I feel that spirit voices are at least 20x quieter than the background noise. So, if I make a model that can remove 3x white noise, I must apply this model multiple times to a white noise source, and after a few iterations, it'll produce something akin to human speech. In fact, I'm fairly certain this method works - however, the voice quality is not clear enough, such that when sharing audio clips with other researchers, we generally can't agree on much of what is being said. Simply put, removing additive noise is not enough to solve the clear-speech ITC problem; however, it may be enough for a dedicated ITC researcher to work with. Another major drawback with ML is that my models, in their current form, are computationally expensive. It is common to run ML on Graphics Processing Units (GPUs), specifically from Nvidia. I estimate the NVIDIA GTX 1650 is the minimum hardware to run multiple iterations of my ML models in real-time. Budget gaming laptops like the Acer Nitro series that are around $600-$700 have the requisite GPU. An alternative is that we host the machine learning models in the "cloud" to yield a single stream for people to tune into. Once again, someone needs to be willing to spend the money to buy a similar computer and host it 24/7 (electricity, AC, etc.). If you have an ML-capable GPU and would like to explore ML-based processing for ITC, let me know, and I will share with you my Python scripts and trained model files. The spirits are always excited about expanding "the network."
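For anyone curious how the (noise + speech) -> speech training pairs are built, here is a simplified sketch. The 1.024-second clip length matches what I described above, but the noise scaling function and the sine stand-in for a real speech clip are purely illustrative:

```python
# Simplified sketch of building (noisy, clean) training pairs for a denoising model.
# The 1.024 s clip length follows the post; everything else here is illustrative.
import numpy as np

def make_training_pair(speech_clip, noise_factor=3.0, rng=None):
    """Return (noisy, clean) where white noise is `noise_factor` times louder (RMS) than speech."""
    if rng is None:
        rng = np.random.default_rng()
    speech_rms = np.sqrt(np.mean(speech_clip ** 2)) + 1e-12
    noise = rng.normal(0.0, 1.0, size=speech_clip.shape)
    noise *= noise_factor * speech_rms / (np.sqrt(np.mean(noise ** 2)) + 1e-12)
    return speech_clip + noise, speech_clip

fs = 16_000
clip_len = int(1.024 * fs)                                      # 1.024-second clips
speech = np.sin(2 * np.pi * 200 * np.arange(clip_len) / fs)     # stand-in for a real clip
noisy, clean = make_training_pair(speech, noise_factor=3.0)
# The ML model is then trained to map `noisy` back to `clean`.
```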
  10. Once again this reminds me of the vision I had where people were using the so-called "water lens" method. For now, a clever microscope with "blinding" results.
  11. Was someone running the microwave? I believe that most major phase changes like this one can be explained by physical reasons that we are just not aware of. However, each of these phases offers clues and provides aspects that spirits can manipulate. In addition, different phases may resonate with different spirit groups. For example, one phase might be more suitable to an astral group, another to a heaven group.
  12. There's always a mix of young and old souls in a civilization. So, it becomes a little like the freezing event in supercooled water: at what point does the sudden phase change occur? 144K reminds me of the number in the Bible for the number of Jews necessary in Israel to bring about the end of the world. I agree that it could well be fear-based beings, in part, that desire to hold our civilization back from its "phase change." While individual soul development takes time, and I respect that, there comes a point where the planet starts getting overwhelmed and needs more enlightened inhabitants to restore balance and health.
  13. Recently, I was able to make some improvements in my ITC audio technology that have increased my reception considerably. I can now hear that a debate is going on in the spirit world: is humanity ready for new technologies that improve our contact with spirit? I feel like most of us in this forum, both currently and in the future, are "ready" for this tech, because we are seekers, and more awareness is what we wanted in this lifetime. The rest of the world, which is the other 99.999999%, may not be so open, pleased, or responsible. What do you think?
  14. Our code was written in Python. Their original codes were written in Matlab.
  15. Video / optical surveillance would be a new twist. I actually have a paper where we use some image change detection algorithms to discern otherwise invisible vibrations. The original work from MIT can be found by googling "extracting audio from visual information".
  16. Nature ITC is where it all began with Friedrich Juergenson. Wind is a good turbulence/entropy generator.
  17. Wow! That makes perfect sense to me, as I had a similar connection when my dad and brother passed. What you may find is there is more than one fragment of your dad floating around. Work with each one letting them know they are loved, and to look for help.
  18. It sounds like you have some "out of towners". You might want to let them know you're not running a ScareBnB.
  19. I have a few filters for "raspy" voices that we could try. Send me some raw samples. Also, the similarity to my spinning top visions is eerie.
  20. If you're doing direct voice ITC, you'll probably want to denoise the signals you're capturing. The goal of denoising is to remove noise from a voice signal, or equivalently to enhance the non-noise, or speech, that may be embedded in a hardware noise source. Of all of the methods for denoising a signal, spectral subtraction is the oldest and most well-known. As the term "spectral" implies, it involves converting a time-based audio stream into a frequency-based (spectral) vector using the Fourier transform. First we assume that the desired signal we are trying to restore, X, is corrupted by an additive noise source, N, such that the resultant, observed signal is Y: Y(t) = X(t) + N(t). Since N is random and unknowable ahead of time, we can't subtract it from the observed signal, Y. However, in frequency space, we can approximately subtract the noise, given knowledge of the noise's average frequency/power spectrum: |X(f)| ≈ |Y(f)| - |N(f)|. Simply put, compute the frequency spectrum of the observed signal, subtract a constant amount in each frequency bin (equal to the estimated noise at that frequency), then return this result to time-space. When a value of |X(f)| ends up below zero, it is simply set to zero. One challenge is knowing the noise's frequency spectrum. This can be estimated by taking the frequency spectrum of a part of the observed signal that is known to contain only noise and no voice. This, of course, is not trivial, given the hypothesis that spirit speech permeates the noise almost continuously. One simplification is to use a hardware noise source that is white, i.e., all of the frequencies, on average, have the same magnitude. This allows us to avoid computing the difficult noise frequency term. There are many papers on spectral subtraction in the scientific literature to help you understand the method better. The one caveat is that the method is usually not applied to cases where the noise overwhelms the weak signal. When that happens, the resultant denoised signal usually sounds like discordant musical tones. The discordance is partially due to the fact that the tones are linearly spaced (like a Fourier transform) and not exponentially spaced (like the notes on a musical scale). Figure 1 shows spectrograms demonstrating spectral subtraction, where I added the same magnitude of white noise as the speech signal (a real physical voice). The left picture is the original clean speech. The middle picture has added white noise. The right picture is the attempted denoising. Notice that only the lower harmonics are still visible in the reconstruction. The higher-frequency formants are missing - this is a perennial problem with direct voice.
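Below is a bare-bones Python sketch of spectral subtraction. It assumes the noise magnitude can be estimated from the quietest frames of the clip; the frame size and the 10% percentile are illustrative choices, not a prescription:

```python
# Bare-bones spectral subtraction: subtract an estimated noise magnitude from
# |Y(f)| in each frequency bin, floor negative values at zero, keep Y's phase.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(y, fs, noise_mag=None, nperseg=512):
    f, t, Y = stft(y, fs=fs, nperseg=nperseg)
    if noise_mag is None:
        # Crude noise estimate: average |Y| over the 10% quietest frames.
        frame_energy = np.sum(np.abs(Y) ** 2, axis=0)
        quiet = Y[:, frame_energy <= np.percentile(frame_energy, 10)]
        noise_mag = np.mean(np.abs(quiet), axis=1, keepdims=True)
    mag = np.maximum(np.abs(Y) - noise_mag, 0.0)   # |X(f)| ~= |Y(f)| - |N(f)|, floored at 0
    X = mag * np.exp(1j * np.angle(Y))             # reuse the noisy phase
    _, x = istft(X, fs=fs, nperseg=nperseg)
    return x

# Example usage: y could be a noisy clip loaded with scipy.io.wavfile.read
# denoised = spectral_subtract(y, fs)
```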
  21. I use Python 3 these days exclusively because of better compatibility with certain modules I use. I'm glad you are making use of what I've written so far. Some day people will look back at the beginnings of the physical link to the spirit internet.
  22. The post where I shared the executable and, more importantly, the Python script for Spirit Soccer is here.
  23. As far as simple ways to test different modulation schemes, I'm going to refer back to my Spirit Soccer program. It allows spirit to pilot a ball into a target, moving it in one of four directions, or not moving it at all. If they can see the computer screen, they can see how their energies affect the decision process. The code can be modified for any modulation scheme that can express 4 symbols. In my current modulation scheme, which requires synchronization: it samples "events" every 1/16th of a second. Every 1/4 second, there are 5 options: up, down, left, right, and do nothing. If the total energy within one 1/16th-second event is much greater than the other three, then it chooses that event, e.g., "up." If none of the four 1/16-second intervals is anomalous or larger than the others, then nothing happens in that 1/4 second. The threshold for determining whether one event "wins" is, of course, set by the user. A stripped-down sketch of this decision logic is below.
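Here is that sketch. The dominance ratio is a placeholder the user would tune; it is not the threshold from the actual Spirit Soccer code:

```python
# Stripped-down sketch of the decision logic: four 1/16 s event windows per
# 1/4 s frame; the window whose energy clearly dominates the other three picks
# the direction, otherwise do nothing. The dominance ratio is a placeholder.
import numpy as np

SYMBOLS = ["up", "down", "left", "right"]

def decide(frame, fs, dominance=2.0):
    """frame: 1/4 second of samples; returns one of SYMBOLS or None."""
    event_len = int(fs / 16)                               # 1/16 second per event
    energies = np.array([np.sum(frame[i * event_len:(i + 1) * event_len] ** 2)
                         for i in range(4)])
    winner = int(np.argmax(energies))
    others = np.delete(energies, winner)
    if energies[winner] > dominance * np.max(others):
        return SYMBOLS[winner]
    return None                                            # no clear winner: do nothing

fs = 48_000
frame = np.random.randn(fs // 4)                           # stand-in for 1/4 s of noise
print(decide(frame, fs))
```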
  24. OK, what you have, I think, is a three-state detector: 0, 1, and nothing. This is a great idea! Let's say:

Values near zero (nulls) are state #1
Outlier values (peaks) are state #2
Everything else is state #3

We just need to decide how (#1, #2, #3) map to (on, off, nothing). Now, I don't know the best way to group samples within a state. Typically, I make fixed-duration groups. For example, look at the first N samples and decide state #1, #2, or #3. Look at the next N samples, decide. Etc. (A quick sketch of this grouping idea is below.) If we only had two states, we would be forced to make them either 0/1 or 0/nothing. 0/1 would force spirits to be synchronous. 0/nothing would be very inefficient - a unary number system. Having three states gives us asynchronous operation and binary!
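Here is that sketch. The group length and thresholds are illustrative values chosen for zero-mean, unit-variance noise, not tuned numbers:

```python
# Quick sketch of the fixed-duration grouping idea: every group of N samples
# becomes state 1 ("null"), 2 ("peak"), or 3 ("nothing"). Thresholds are
# illustrative values for zero-mean, unit-variance noise.
import numpy as np

def classify_groups(x, group_len=16, null_thresh=0.4, peak_thresh=1.2):
    """Classify consecutive groups of samples by their mean absolute amplitude."""
    states = []
    for i in range(0, len(x) - group_len + 1, group_len):
        level = np.mean(np.abs(x[i:i + group_len]))
        if level < null_thresh:
            states.append(1)       # unusually quiet group -> "null"
        elif level > peak_thresh:
            states.append(2)       # unusually loud group  -> "peak"
        else:
            states.append(3)       # everything else       -> "nothing"
    return states

x = np.random.default_rng(1).normal(size=48_000)   # 1 s of unit-variance noise at 48 kHz
states = classify_groups(x)
print(states[:20], "| nulls:", states.count(1), "peaks:", states.count(2))
```

Most groups land in state #3, with occasional nulls and peaks - which is exactly the asynchronous behavior we want.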
  25. I don't completely understand your spirit impulse keying method yet. Maybe you can explain again. I've been using pulse position modulation with a user-selected threshold. My source signals are typically white noise with Gaussian distributions. I imagine that you are trying to observe "state changes" whenever they occur, and then decide 0 or 1?