Michael Lee
Team · Posts: 267 · Days Won: 61

Everything posted by Michael Lee

  1. There are definitely limits on how much spirit can interact with devices. We're still working out the "rules" of how they interact. Dr. Gary Schwartz, in his research, sets the bar at 1 bit per 10 seconds. Clear voice would require thousands of bits per second. We would be happy with ~8 bits (one character) per second for text; a rough comparison of these rates is sketched below.
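    As a back-of-the-envelope illustration of what those rates mean in practice, here is a small calculation. The "clear voice" figure of 8000 bits per second is my own stand-in for "thousands of bits per second," not a measured value.

        # Rough time to receive a 5-character word at the data rates discussed
        # above, assuming 8 bits per character. Illustrative estimates only.
        BITS_PER_CHAR = 8
        word_bits = 5 * BITS_PER_CHAR  # 40 bits for a 5-character word

        rates_bits_per_second = {
            "Schwartz estimate (1 bit / 10 s)": 0.1,
            "Text goal (~8 bits/s)": 8.0,
            "Clear voice (thousands of bits/s, say 8000)": 8000.0,
        }
        for label, rate in rates_bits_per_second.items():
            print(f"{label}: {word_bits / rate:.2f} seconds per word")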
  2. I'm fairly sure that spirits can synchronize with our devices. This is required for my tone vocoder to work. However, having an asynchronous system would be good, too, to see which one has a lower error rate. It seems spirits can align with our various schemes, but they have to contend with a low signal-to-noise ratio, which can lead to errors.
  3. Kevin, thanks for the explanation! I agree spirit has some amazing tech to help us bridge the gap between our worlds. I'm also definitely interested in making my own digital "ectoplasm," i.e., a physical system that gives them a palette to work with, whether it be sound, images, etc.
  4. A random person on the internet suggested using a magnifying glass on the black and white image. It would be interesting to see whether it shows organic/random black ink speckles or a matrix of dots, indicative of a print process.
  5. Version 1.0.0

    64 downloads

    This program generates a constantly changing noise pattern based on a pseudo-random number generator (PRNG). Scientifically speaking, PRNGs should not be able to be influenced by spirits, but there are two caveats. One is that the random seed used in Python may somehow be affected (e.g., through timing). The other is that this may be an example of a partnership between our medium-capable, suggestible minds and the randomly generated shapes and forms. At some point, we will also introduce random visuals derived from hardware sources, like the audio input of your computer. It will be interesting to see if the images improve.

    Download Program (the compiled program runs only in Windows 10): Amorphous Field Zip File. Click the link, then click the down-arrow button at the top right to download. Unzip the file (over 700 MB uncompressed!) and it will produce a folder. In this folder is an EXE file called amorphous.gui.exe that you can run.

    Source Code (Python; any OS): If you have Python / Anaconda3, you can play with and modify the script. Let us know if there is a more efficient way to make an EXE from Python, or if you want, you can recode the program in another programming language. 10-2020_Rainbow_People_Python_byMichaelLee_Varanormal_REV1.00.py
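    For anyone who wants to see the basic idea without downloading the EXE, below is a minimal sketch of a PRNG-driven noise field. It is not the code inside amorphous.gui.exe; the grid size, grayscale palette, and refresh rate are my own choices for illustration.

        # Minimal sketch of a PRNG-driven "amorphous field": a grid of
        # pseudo-random values redrawn a few times per second. Illustrative
        # only; parameters do not match the attached program.
        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.animation import FuncAnimation

        rng = np.random.default_rng()   # seed comes from OS entropy (timing-dependent)

        fig, ax = plt.subplots()
        img = ax.imshow(rng.random((128, 128)), cmap="gray", vmin=0.0, vmax=1.0)
        ax.set_axis_off()

        def update(_frame):
            img.set_data(rng.random((128, 128)))   # fresh pseudo-random frame
            return (img,)

        anim = FuncAnimation(fig, update, interval=200, blit=True)  # ~5 frames/s
        plt.show()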
  6. Version 1.0.0

    38 downloads

    Information about this download can be found here:
  7. After developing and experimenting with the phonetic typewriter, which is a noise-gated stream of user-supplied, speech-like sound, I noticed that at times it seemed like there was a mix of the expected audio and something else. This gave me reason to believe that there could be voices coming directly from the noise itself. Direct voice, as it were, corresponds to extracting the voices from the noise with no extra audio added in. This method is indeed the original method of spirit communication / listening from ITC/EVP pioneers Jurgenson and Raudive. Purists believe it is the only legitimate method, feeling that supplying speech-like audio is a form of "cheating." However, as we learned earlier, as long as we know what the supplied audio was, we can determine whether changes were made, or whether certain phonemes were emphasized/amplified.

    Traditionally, direct voice can be a slow, arduous method. Start a tape recorder, ask a few questions, and wait for an answer of a few words to show up on the tape medium. Do this enough and collect samples of occasionally legible spoken words. My colleague Keith Clark, and others, have realized that there may be a much richer, almost continuous stream of anomalous speech in the noise if one utilizes post-processing denoising techniques on a hardware noise source (e.g., radio static). In fact, these processing techniques, in the form of various combinations of audio plugins, can be applied in real time to the noise to produce speech-like sounds.

    Where my research evolved in the direct voice arena was a systematic exploration of four software techniques for extracting a weak voice signal from an otherwise dominant noise source: spectral subtraction, musical vocoder, formant detection/synthesis, and machine learning. In future articles, I will explain each method in more detail.
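    In the meantime, here is a minimal sketch of the first of those four techniques, spectral subtraction, assuming a mono float array and a noise-only lead-in at the start of the recording. The frame size, overlap, and 0.5-second noise estimate are my own illustrative choices.

        # Minimal spectral-subtraction sketch: estimate the noise spectrum from
        # a noise-only lead-in, then subtract that magnitude frame by frame and
        # resynthesize with the original phase (overlap-add).
        import numpy as np

        def spectral_subtract(signal, sample_rate, noise_seconds=0.5,
                              frame=1024, hop=512):
            noise = signal[:int(noise_seconds * sample_rate)]
            window = np.hanning(frame)

            # Average magnitude spectrum of the noise-only portion.
            noise_frames = [noise[i:i + frame] * window
                            for i in range(0, len(noise) - frame, hop)]
            noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames],
                                axis=0)

            out = np.zeros(len(signal))
            for i in range(0, len(signal) - frame, hop):
                spec = np.fft.rfft(signal[i:i + frame] * window)
                mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
                clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
                out[i:i + frame] += clean * window               # overlap-add
            return out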
  8. Yes! We could feed the noise and signal through separate channels like L and R. Then, the signal would be clear as a whistle. I haven't tried this yet, but it should be doable in my scripts. One of the caveats I can see is that there may be a little magic (like voices coming through the noise, or signal manipulation) that we would lose.
  9. Here's where I posted the software, wave file, and instructions:
  10. Fernando, thanks for your insights and for sharing sources. Louis and I also have a theoretical science background, and of course Andres is an electronics whiz. Between all of us and others, maybe we can begin to make sense of the more ethereal take on things and find the connections between physical and spirit. All physical objects have some form of dual in spirit. But physical humans also happen to be one of the most complex systems that we know of so far in this universe. There must be at least a few ways our human system subtly but firmly interacts with spirit, so I'm interested to learn more. In addition, some of us in the forum can collect more advice from spirit. I astral travel; however, unless I'm in the higher planes, I'm not going to get illuminating results like the sources you are sharing.
  11. Thanks for the document. Distilled to what I understand, gravity can have measurable effects on even quantum-scale systems like atoms. Non-paranormal researchers have been using the phenomenon to detect things like underground oil fields. So are spirits interacting with our systems via gravity? Maybe? I've imagined that whatever the spirits are doing, it's very subtle, and if researchers weren't looking for it, they would chalk it up to "noise," which is a catch-all term for everything they don't have time to find the cause for.
  12. Back in the old days, each spirit had their own style. Some waited patiently for a good phoneme, others just talked through them. They are currently trying to automate the process.
  13. bubbles1.jpg

    I took this picture before I got into ITC, trying to find pretty things to take close-up pictures of. It does have some nice fractal-like properties. I don't see any faces or anything. Maybe a message from aliens.
  14. Color version produces some interesting shapes and forms. Should I set this to music?
  15. I made this video yesterday that further explains and demonstrates the phonetic typewriter method for spirit communication.
  16. Yes. It's simply a regular speech file that's been rearranged randomly in 150 ms intervals. I first removed most of the silences in the speech.
  17. Andres - not that exact file, that one used 150 ms audio / 150 ms silence. The one I shared has no silences, because the new gate (noise gater) can tolerate it.
  18. Before I make a video showing the phonetic typewriter with my new Python noise gate, here's a video of my original phonetic typewriter using the Maximus noise gate in FL Studio from January 2019. Originally, the stream of audio was alternating between 150 ms speech and 150 ms silence. The gate opened for 100 ms when it detected a high sample. The audio stream I used was always the same recording, starting from the beginning.
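    For anyone who wants to build a stream like the one described in the last few posts, here is a sketch of the general recipe: slice a speech recording into 150 ms chunks, drop the quiet ones, shuffle, and interleave 150 ms of silence. The file names and the simple RMS-based silence removal are my own illustrative choices, not the original script.

        # Sketch: build a shuffled phoneme stream of 150 ms chunks with
        # 150 ms silences in between, from a mono 16-bit speech WAV file.
        import numpy as np
        from scipy.io import wavfile

        CHUNK_MS = 150

        rate, speech = wavfile.read("speech_source.wav")   # hypothetical input file
        speech = speech.astype(np.float32)
        chunk = int(rate * CHUNK_MS / 1000)

        # Crude silence removal: drop chunks whose RMS is well below average.
        pieces = [speech[i:i + chunk] for i in range(0, len(speech) - chunk, chunk)]
        rms = np.array([np.sqrt(np.mean(p ** 2)) for p in pieces])
        pieces = [p for p, r in zip(pieces, rms) if r > 0.2 * rms.mean()]

        np.random.shuffle(pieces)                          # random reordering

        silence = np.zeros(chunk, dtype=np.float32)        # 150 ms gap
        stream = np.concatenate([np.concatenate([p, silence]) for p in pieces])

        wavfile.write("scrambled_stream.wav", rate, stream.astype(np.int16))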
  19. My experience suggests that for voice, a digitally activated tone vocoder performs better than direct voice. I'll get around to explaining it in more detail in a few weeks. Digital spirit bits have an error rate, so trying to find the best hardware noise sources is another challenge.
  20. I highly recommend using my spirit soccer program. I'll put it in the Software section. It allows us to test different digital schemes, to see which is easiest for spirits to use. It accepts 4 possible inputs: left, right, up, and down, to direct a ball towards the target in real time. How we convert hardware input to these 4 choices is the "modulation scheme."
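    The program's actual modulation scheme isn't reproduced here, but as an illustration of the idea, one very simple scheme maps each byte from an entropy source onto the four directions. os.urandom is only a stand-in for a real hardware noise source such as a sound-card input.

        # Illustration of one possible 4-choice "modulation scheme": map each
        # byte of an entropy source onto left/right/up/down (2 bits per move).
        import os

        DIRECTIONS = ("left", "right", "up", "down")

        def next_move():
            byte = os.urandom(1)[0]      # 0-255 from the OS entropy pool
            return DIRECTIONS[byte % 4]  # 2 bits of information per move

        if __name__ == "__main__":
            print([next_move() for _ in range(10)])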
  21. I haven't tried blank screens, so I don't know. One thing one might try is taking pictures in darkness with high ISO settings. The idea is that there is some amount of noise (from the camera sensor) for spirits to work with.
  22. Noise gates are an integral part of most ITC systems. They are a subset of something called expanders, whose job is to expand the dynamic range of certain ranges of the signal. Below the gate, noise is attenuated. Above the gate, the signal is amplified to achieve more clarity. If you listen to the raw sound of a typical entropy / noise source, it sounds pretty boring, as if there's nothing interesting or "paranormal" going on. However, when you expand or noise gate the signal, you emphasize the slight variations from random and presumably the weak signals from spirit.

    Most of the time, gating can be performed in software, as the first in a chain of effects. It can also be performed in hardware using a noise gate guitar pedal. Typically, a noise gate works as follows. It first waits for voltages or samples above a user-defined threshold. When these "spikes" are detected, a "gate" is opened which allows sound through for a pre-determined period of time, before closing again. The gate can either allow through only sound above a second threshold, or all sound during the open phase.

    In my scripts, I wrote my own gate for a 1.024-second clip, which first detects all of the "spikes" above a threshold, then convolves those spikes with a window function (~100 ms). The resultant window is then sample-by-sample multiplied with the original clip. In addition, if the noise-gated signal sounds too choppy because the gate is short (<50 ms), we can apply time-stretching techniques to make the spikes sound more realistic. The simplest idea is to convolve the signal with an all-pass filter, which is similar to adding reverberation (room echo).

    Attached is a Python script that allows you to select input and output devices, and then interactively control a noise gate on whatever noise / phoneme source you like. I also provide a Windows executable of Noise Gater for people who don't want to install Python and the dependent modules. From recent tests, I have determined that this gate is more sensitive than the Neewer noise gate guitar pedal that is often recommended. However, the software gate is not 100% real-time; it is always time-lagged by 2 seconds.

    Finally, to get you started, I also have a scrambled phoneme file (150 ms) that you can play from your cellphone into your PC audio input. Keep the volume low (25%) for best results. You need a mix of noise (affected by spirit) and recorded audio for gating to work, otherwise you'll just be gating the same recorded audio over and over again. noise_gater.py
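    For readers who want to see the core gating step in isolation before opening the attached script, here is a minimal sketch of the per-clip logic described above (spike detection, convolution with a ~100 ms window, sample-by-sample multiplication). The threshold and window length are illustrative defaults, not the values used in noise_gater.py.

        # Sketch of the gating steps for a single 1.024 s clip: find samples
        # above a threshold, convolve that spike train with a ~100 ms window,
        # and multiply the resulting envelope with the original clip.
        import numpy as np

        def gate_clip(clip, sample_rate, threshold=0.2, window_ms=100):
            spikes = (np.abs(clip) > threshold).astype(np.float32)  # 1 where loud
            win = np.hanning(int(sample_rate * window_ms / 1000))
            envelope = np.clip(np.convolve(spikes, win, mode="same"), 0.0, 1.0)
            return clip * envelope                                  # gate opens near spikes

        # Example on synthetic data: 1.024 s of quiet noise with one loud burst.
        rate = 16000
        clip = 0.05 * np.random.randn(int(rate * 1.024)).astype(np.float32)
        clip[8000:8200] += 0.8
        gated = gate_clip(clip, rate)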
  23. When I first started in ITC, I followed the tried-and-true strategies like software Ghost Boxes, but realized I could do better, a lot better... The phonetic typewriter is one of the most popular methods in use by EVP researchers today. However, other ITC researchers may not use that term. They might instead call it a Ghost Box or a Spirit Box. The general concept is that short clips of regular human speech (forward, reverse, from radio, etc.) or similar sounds are used as a base signal for spirits to "punch through," or raise the volume above a noise gate. They can also let certain clips bounce through a feedback loop of a speaker and microphone (e.g., EchoVox).

    In a typical ghost box, a radio quickly scans through a loop of radio stations. There are naturally periodic durations of speech/music and silence. Presumably, spirits use the audio signals, or at the very least boost the audio energy, and push the signal in different ways into the silence regions. The PC software EVPmaker has similar options. It can take a recorded clip of voice, break it into small fragments, and emit these fragments in random order at fixed time intervals.

    One of the drawbacks of the ghost box approach is that it's impossible to know what the underlying radio sounds were. For example: "Was it a coincidence that a radio station just said my name?" Therefore, some of my earliest ITC work (November 2018) was developing my own fixed recording of equally spaced, randomly shuffled voice fragments from a 30-minute General David Petraeus speech to Congress, which I played from my cellphone (transmitter) into my external USB audio interface (receiver). The received signal was noise gated with an FL Studio plugin called Maximus, which detected samples above a threshold and opened a noise gate for a fixed period of time (e.g., 150 ms). A closer investigation of the phenomenon showed that 20 ms pulses (band-passed spikes of energy?) showed up to lift desired fragments above my very sensitive noise gate threshold.

    Now, if you listened to the original stream recording by itself, you could hear different random words being formed by the random ordering of 150 ms audio clips separated by 150 ms of silence. However, in the noise-gated apparatus, it would sound like randomly positioned phonemes. If I set the volume of the transmitted signal low enough, the pattern that emerged each time I reset the recording was different. It appeared as though my spirit friends were typing out messages in audio from the available phonemes. Stranger still, each voice had a different characteristic and accent! Some would talk fast, almost through the clips. Others would patiently wait for the right phonemes to type out their words. It wasn't super-intelligible in real time, as I often heard things a little differently upon playing back the recorded session.

    Generally speaking, early on I was picking up a European ITC team speaking to me in English: two Germans and one Englishman. Apparently, they chose this profession in the afterlife after a career in military communications. Now they saw themselves as facilitators, not as monologuing speakers themselves. They worked with a spirit they called the "Director," whom I would later hear as a bold British female voice. They, along with the Director, appeared to be bridging connections to interested speakers and some of my ancestors. Fairly early on, my great-great-grandmother, Sophie Fertle, and grandfather, Alvin Lee, showed up. They became regulars later on.

    In addition, passers-by would show up, and the technician team would explain my various setups, often with apparent enthusiasm - which encouraged me further. One particular visit helped me understand what was going on with the phonetic typewriter a lot better. A close friend from graduate school, David, who died very young (age 26) in a freak accident, showed up for just about a minute of one session. In that brief period, he was able to identify himself by first and last name, say where he knew me from, and offer, among other things, the illuminating phrase: "Words are entropy."

    Now, up to this point, I had found it strange that even though I would play the same recording over and over, I would get different messages - how was this working? When I heard the phrase "words are entropy" and looked very carefully at the signal he produced, a light bulb turned on in my brain. When I played 150 ms clips with 150 ms spaces, I was essentially presenting 3 "extended phonemes," or syllables, per second. Depending on which parts of the syllables the spirits pushed through, it was as though they could create 2^N possible combinations per second, where N is the number of regions they could distinctly push through - I estimated roughly 6 segments per second: two halves of each syllable. Therefore, using this rough estimate, they had 64 possible expressions per second (a small calculation restating this appears at the end of this post). The spirits then went on to tell me that in fact the number of possibilities was considerably higher. In addition, I was inspired to start piping two streams simultaneously, staggered 150 ms from each other. This turned into a device that made their speech a lot faster - almost rapid fire.

    As I've never been able to settle on any one system thus far, I also noticed another phenomenon: when the stream was played weakly enough into the USB audio interface, it sounded like the spirits were trying to talk through my audio clips. Thus began my quest to listen to their voices directly, without the help of external speech patterns.

    Original Setup

    Cellphone (playing a fixed recording of spaced, random syllables) -> shielded audio cable(s) -> USB input audio interface -> PC -> Maximus plugin (noise gate) in FL Studio -> USB output audio interface -> speaker.

    Recommended Setup for Experimenters

    I plan on writing a Python script that does the software steps necessary for this setup. All you will need is 1) a cellphone to play the scrambled phoneme stream WAV file (we can all use the same one(s), and I'll provide that, too); 2) a PC desktop or laptop to run the Python script / executable; 3) a male-to-male audio cable to connect your phone to the microphone/line input of a laptop.
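    Restating the rough "words are entropy" estimate above in code (the figures are the same ones given in the post, not new measurements):

        # With 3 syllables per second and 2 independently selectable halves per
        # syllable, there are about 2**6 = 64 distinct expressions per second,
        # i.e. roughly 6 bits of information per second.
        import math

        syllables_per_second = 3
        segments_per_syllable = 2
        n = syllables_per_second * segments_per_syllable    # 6 selectable regions/s
        combinations = 2 ** n                               # 64
        bits_per_second = math.log2(combinations)           # 6.0
        print(combinations, bits_per_second)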