Michael Lee

Everything posted by Michael Lee

  1. Mike: there are many variations of filters and noise sources you can use. Some people use voice or voice-like audio. What matters is that you can distinguish between what you think is deterministic (what should have happened) and what actually happened, which might have been a message from spirit. Use your intuition, as that's part of the process, until we get messages so clear that there is no doubt.
  2. Mike, Thank you for your interest in trying out the software! This weekend, I'll first try again to turn this software into an "executable" like I did with Spiricam, which makes it super easy for anyone to download and use. If that doesn't work, I'll start working on explaining the install process in more detail with some pictures or maybe a video. -michael
  3. Jeff: Was this a direct transcription of the reversed audio, or from a microphone picking up the ambient sound plus a speaker playing the reversed audio? In any case, I'm OK with transcription software this sensitive, because you can use other metrics, like word counts, to verify whether the text is random or coming from a spiritual source.
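The word-count idea above can be made concrete. Here's a rough sketch (the keyword set, the sample transcripts, and the keyword_rate helper are all hypothetical, not part of Michael's software) of comparing how often target words appear in a session transcript versus a control recording of plain noise:

```python
def keyword_rate(transcript, keywords):
    """Fraction of transcribed words that belong to a target keyword set."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in keywords)
    return hits / len(words)

# Hypothetical transcripts: one from a session, one from pure noise (control).
session = "hello we are here remove the wire hello"
control = "static the of and noise hum the static"
keywords = {"hello", "wire", "remove"}
```

If the session rate stays consistently higher than the control rate over many trials, that's at least weak evidence the text isn't purely random.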
  4. Having a speech-to-text tool like this is very useful. I'm not quite ready to pay for the monthly subscription, but I appreciate your testing. Regarding the audio, it sounds like a feedback echo may be occurring, where your microphone is picking up sound from the speaker. Anytime I use a microphone, I wear headphones. Even then, a sensitive enough microphone can pick up the headphones.
  5. I'm happy to read that the code is working for you. I'm busy working on an improved detoning model. The script doesn't change much, if at all, but the goal is to analyze a half-second of signal and figure out the closest true speech (tm) analogue. The hard part is getting the output to sound "clean," even if it is an imperfect guess. A clickable link to start up the script would be nice. I never thought of it for this code, because I'm always tweaking it in the Spyder editor.
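For what it's worth, the "closest true speech analogue" idea could be sketched as a nearest-neighbor search over spectral signatures. This is only a guess at the approach; the window length, the normalization, and both function names are assumptions, not Michael's actual model:

```python
import numpy as np

RATE = 16000
WIN = RATE // 2   # half a second of signal, as described above

def spectral_signature(clip):
    """Magnitude spectrum of a half-second clip, normalized for level."""
    mag = np.abs(np.fft.rfft(clip, n=WIN))
    return mag / (np.linalg.norm(mag) + 1e-12)

def closest_speech(clip, templates):
    """Index of the stored speech template whose spectrum is nearest the clip's."""
    sig = spectral_signature(clip)
    dists = [np.linalg.norm(sig - spectral_signature(t)) for t in templates]
    return int(np.argmin(dists))
```

The hard part Michael mentions, making the matched output sound clean, would come after this step: resynthesizing from the chosen template rather than just identifying it.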
  6. I notice a range of quality within a single minute of recording. Maybe a few clear words, a few possible phrases, and a lot of unclear utterances. This is the life of an ITC researcher - the threshold of reality and delusion.
  7. Manuals? Sounds like another job we'll need to outsource. OK, if the program is running, that's good. I'm a little concerned that you are not hearing yourself; that tells me the input device might not be the right one yet. If you move all the sliders to the left and uncheck "Machine Learning," the output should be close to the raw input (delayed by 2.5 seconds). When you do get the microphone picking up, the best results seem to occur when you have some sort of noise or tones in the background. Complete silence will lead to far fewer "blips" of voice. Stepping back to the general path-of-executables problem (conda and the like): I don't usually have a problem when I run within an Anaconda prompt/shell. The shell, I'm guessing, sets all of the path variables for "conda", "python", "spyder", etc. Regarding saving, the way I have it set up is that it stores all of the output from the beginning and saves it to a file on "Exit." The save location can be changed by modifying the main Python script near the end. I know it's a little clunky. This is the one program I haven't fully made "user-friendly."
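The raw-input path described above (output equal to input, delayed by 2.5 seconds) can be approximated with a simple rolling buffer. A minimal sketch, assuming a 16 kHz sample rate and 1024-sample blocks (both invented for illustration, not taken from the program):

```python
import numpy as np

RATE = 16000          # sample rate in Hz (assumed)
DELAY_S = 2.5         # output lags input by 2.5 seconds, as described above
BLOCK = 1024          # samples per audio callback (assumed)

delay_samples = int(RATE * DELAY_S)
buf = np.zeros(delay_samples + BLOCK)

def process_block(block):
    """Return audio delayed by DELAY_S seconds; with all effects
    switched off, this is just the raw input from 2.5 s ago."""
    global buf
    # Append the new block and keep only the most recent samples.
    buf = np.concatenate([buf, block])[-(delay_samples + BLOCK):]
    # The oldest BLOCK samples in the buffer are exactly delay_samples old.
    return buf[:BLOCK]
```

Until the buffer fills (the first 2.5 seconds), the output is silence, which matches the startup behavior you'd expect from a fixed delay line.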
  8. You're right! Until we reach definitive messages, our perception of the messages is part of the chain. It's digitally-assisted mediumship.
  9. Good points! Metastability/fragility, chaos/noise processes, and the energy should resemble the desired output. Fragility is also a good term to describe how our experiments work for one moment and then stop working the next. There are very precise balance points which maximize spirit signal but also fall apart very easily. Edit: In the stream, spirits just reminded me of another key concept: inference software. We can get all the conditions right, but then we need a little help from software to convert the signal to semi-intelligible information (e.g., voices).
  10. OK. The conda env create needs to be run in the directory where the environment.yml resides. You can move the .yml around to put it where you need it; it's only used once, for that command.
  11. I will introduce some new "steampunk" or acoustic ITC methods in the next post. But first I want to share some theories I have about audio ITC. To me, reception of spirit / interdimensional signals has at least three components: 1) sensitivity to the signal, 2) resonant modes of the detector, and 3) driving energy.

Sensitivity means that whatever spirits can use to communicate with us, like virtual photons, wavefunction selection, or whatever, our devices can pick up those changes/anomalies. The most traditional detectors people have used are microphones and scratchy diodes. Presumably, the microphone picks up small air-pressure changes and/or electromagnetic signals affecting its inductive coil. The diode could be picking up radio waves, scalar waves, thermal changes, etc. In any case, every ITC detector has some sort of sensitivity. Detectors can be virtually anything, like water or even a hard rock. As long as we can perceive (humanly or electronically) changes in that detector, it should work. The question, though, is how sensitive to spirit that detector is compared to others. To that I don't have an answer, but we can certainly select our favorites for experimentation based on perceived improvements or ease of use.

Resonant modes refers to the available states of the detector or the broader physical ITC system. It can be thought of as the frequency spectrum of physical and non-physical signals emanating from a given device. For example, some devices have two states: they emit either short "pops" or nothing at all. Some have pops of differing duration and amplitude. Some devices emit constant white noise. Others have certain dominant tones, like wind chimes. Still others have a dominant on/off buzzing sound, like some of Andres' creations. In each case, there's an "available" set of frequencies that can be produced.
Obviously, if we wanted to hear a perfect human voice, the device would need to emit all of the frequencies in a range of approximately 100-8000 Hz. Devices that emit white noise sound great for this challenge, but often suffer from overdoing it in the last factor...

Driving energy refers to how much our device is physically stimulated. A great example is the work of Anabela Cardoso. She finds that a microphone with noise playing in the background is much better than a microphone in a completely silent room. The added noise is "driving energy." It is both a source of energy for the spirits to manipulate, and it ruffles up the air molecules in the room, providing a "canvas" for spirit signal implantation. But too much driving energy may not be such a good thing. If I play a super loud buzzing sound (to represent the human glottal voice pattern), we're not going to hear any variations in that buzz unless we use some pretty serious noise-cancellation software. Meanwhile, if I supply a light amount of buzz, the variations may begin to be noticeable to the human ear.

Here's another "overdrive" situation: radio static. Radio static, when evaluated with a spectrogram, looks as random as can be. You have to apply a lot of software noise removal to extract anomalous signals. I would argue that too much noise makes the filtering process more difficult than it needs to be. One way people balance out the noise is by playing it over a speaker to be picked up by a second microphone.

OK, enough rambling about theories. In my next post, hopefully, I will have some interesting results to share.
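The light-vs-loud buzz argument above can be illustrated numerically. In this sketch (all amplitudes and frequencies are made up for illustration), the same small "anomalous" variation is mixed with a modest glottal-like buzz and with a ten-times-louder one; the variation's share of the total signal shrinks as the drive level grows:

```python
import numpy as np

RATE = 16000
t = np.arange(RATE) / RATE  # one second of time samples

# A light "glottal" buzz: the first few harmonics of a 120 Hz fundamental.
f0 = 120.0
buzz = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))
buzz *= 0.1 / np.max(np.abs(buzz))   # keep the drive level modest

# A small anomalous variation riding on top of the buzz.
variation = 0.02 * np.sin(2 * np.pi * 300.0 * t)

light_mix = buzz + variation        # light drive: variation is a sizable share
loud_mix = 10 * buzz + variation    # heavy drive: same variation is buried

ratio_light = np.std(variation) / np.std(light_mix)
ratio_loud = np.std(variation) / np.std(loud_mix)
```

The lighter the drive, the larger the variation's relative contribution, which is exactly why a gentle buzz leaves the anomalies audible while a loud one swamps them.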
  12. I don't recommend moving any files around, but that may have helped you with scalar. Now "conda env create --file environment.yml" should do a bunch of stuff and create a new environment called "tf25_nogpu". If you look at the environment.yml with a text editor, it should specify "tf25_nogpu" somewhere as the "title" and then list all of the packages it will download and install when "created." This environment will then support the "ITC translator." If you're not getting "tf25_nogpu" to show up as an available environment after doing the "create" (check with "conda env list"), then "hmmm." Another trick to know: as long as you don't modify the "base" environment, you can always remove other environments and try building them again, starting with "conda env remove -n <name of broken environment>". Once again, moving files around shouldn't be necessary. If all else fails, we can do a "Zoom" to see what's going on. -michael
  13. Right. I use something called the Anaconda Prompt shell to enter the command line. Python and conda are both available there. All this is good for the instructions.
  14. I know our French Varanormal colleague (itc-station.org) can write in Javascript. I have no idea what it would cost. I imagine, for the number of people who would experiment with it at first, it might be cheaper to buy the license to a program that does something similar - likely with multiple sweeps at once.
  15. 1. The program should be able to run on a 386. 2. Probably the easiest solution would be for someone to rewrite it in Javascript, where it could then be run simply in a mobile browser. 3. I was using a program called PyInstaller. This worked for Spiricam, but it produces no error messages for me to debug when I try running the compiled executable on a second Windows computer. This program is arguably simpler than Spiricam, so someone with a little patience and MS Visual C++ could reprogram it and make the executable that way.
  16. Version 0.1

    12 downloads

    I wrote a program to generate sweeping up and down tones, with user controls. What I wasn't able to do was get this code into executable form for simple installation. So, you'll need to install Miniconda (Python 3.8, 64-bit), then go into the uncompressed directory and type:

    conda env create --file scalar.yml

    Then, to run the program:

    conda activate scalar
    python ./scalar_wave.py

    The GUI should be self-explanatory. If you have any questions or issues, post them below. -michael
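For anyone curious how sweeping tones like scalar_wave.py's might be generated, here's a minimal sketch (not the actual program; the function name and parameters are illustrative). A linear sweep is just a sinusoid whose phase is the integral of a linearly changing frequency:

```python
import numpy as np

RATE = 44100  # CD-quality sample rate, assumed

def sweep(f_start, f_end, duration_s, rate=RATE):
    """Linear up- or down-sweep between two frequencies (a simple chirp)."""
    t = np.arange(int(rate * duration_s)) / rate
    # Instantaneous frequency moves linearly from f_start to f_end,
    # so the phase is the time integral of that frequency.
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration_s))
    return np.sin(phase)

up = sweep(200.0, 2000.0, 2.0)     # rising tone
down = sweep(2000.0, 200.0, 2.0)   # falling tone
```

Swapping f_start and f_end flips the sweep direction, which is presumably how the up and down tones in the program differ.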
  17. The environments are all stored in a central location. "Activate" sets all the correct paths for python, spyder, etc. My program isn't affected by those path variables, but, of course, the libraries that it calls are affected.
  18. Spyder is a headache with this new environment. It had to be installed via pip, not conda, so there's no simple shortcut in Windows. Each time I load up Spyder, I have to activate the environment in the Anaconda shell, first, then type spyder.
  19. The environment.yml file is used to build the environment for the first time (and then that installation is done). But then that environment needs to be "activated" (or selected) to use: conda activate tf25_nogpu
  20. File "C:\Users\User\anaconda3\envs\tf21_nogpu\..." tells me you didn't first run "conda activate tf25_nogpu". There are definitely compatibility issues between different Tensorflow versions, so you have to be in an environment running Tensorflow 2.5 to run the models I provided.
  21. Kevin - Miniconda, which is a smaller version of Anaconda, should be sufficient. https://docs.conda.io/en/latest/miniconda.html#windows-installers Miniconda installs the basic Python library management tools, and then my instructions will lead to a download of the Python libraries that my program needs. I tried the process on my daughter's Windows computer, and the whole thing was fairly painless. The only problem was that when I went to remove Miniconda (to clear her computer), I accidentally uninstalled Minecraft instead. Needless to say, I'm now in big trouble!
  22. Sharon, sorry your post didn't get responded to earlier. Must be the universal interference! Yes, it's consistent with what we know about spirit communication that it can be enhanced when our energies are activated. I can try to enhance the speech, too. -michael
  23. Andres and Kevin: You are two of the "alpha-testers." Download the code from the link, when you get a chance, and have a fire extinguisher handy!
  24. Since early 2019, I have been working on software to extract voices from physical noise/signals. My earliest attempts used other people's software, mainly an algorithm called "spectral subtraction" in a ReaFir noise-reduction plugin. This converts the noise into the frequency spectrum, where slight imprints of voice can be discovered and emphasized. We now enter the year 2022. Spectral subtraction is still a very valuable tool, but it is only the beginning of a process I've developed for extracting voices. I've created machine-learning-based models to find and emphasize voices. I've also made a program that finds and generates "formants," or peaks in the harmonic buzz of the human voice.

I'm finally releasing my full software, in Python. I use (a very similar version of) this code in all of my experiments (FPGAs, radio noise, etc.). I would've liked to share it as an executable, like I did with Spiricam, but Python executable-makers are notoriously buggy. Another reason I've hesitated in sharing the code sooner is that it used to require some heavy GPU resources. However, thanks to some software developments by Google, my ML models seem to run pretty well on the CPU in real time.

So if you want to try out my code, you'll have to do some command-line steps, and you'll at minimum have to install a free program called Miniconda (or its larger version, Anaconda) with Python version 3.8, 64-bit. Maybe a few GBs of disk storage will be required. Here's the link to the code: https://drive.google.com/drive/folders/1fu6hAuE0AbhbQjx0Ts_3Ju0QRJ0awxRM?usp=sharing In the directory is a README.txt, which I'll update as we iron out the instructions. When I've resolved most of the common issues, I'll make the code into a ZIP file for the Downloads section. For now, feel free to ask questions in the comments. As I like to say, "The spirits are waiting!"
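As a rough illustration of the spectral subtraction idea mentioned above (a textbook-style sketch, not the ReaFir plugin or Michael's released code; frame size and function name are assumptions), one can estimate an average noise spectrum from a noise-only stretch and subtract it frame by frame, keeping the original phase:

```python
import numpy as np

def spectral_subtract(signal, noise_profile, frame=512):
    """Crude spectral subtraction: estimate the noise magnitude spectrum
    from a noise-only recording, then subtract it from each frame of the
    signal, clamping magnitudes at zero and keeping the original phase."""
    # Average magnitude spectrum of the noise-only recording.
    usable = len(noise_profile) // frame * frame
    noise_frames = noise_profile[:usable].reshape(-1, frame)
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    out = np.zeros(len(signal) // frame * frame)
    for i in range(0, len(out), frame):
        spec = np.fft.rfft(signal[i:i + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # subtract noise floor
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
    return out
```

Whatever survives the subtraction is, by construction, what doesn't look like the steady noise floor, which is why faint voice-like imprints stand out after this step.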
  25. I think it's all real. I mean, I wouldn't be working so hard if I didn't think it was. Proof is hard, though. I've tried a little, like having a third party ask me to get information from my stream. But not enough - I hope others can help. There are a few challenges. One is that you want messages to be clear enough that two or more people can agree. Another challenge is for the messages to be long enough to be useful. A message like "remove the wire" isn't enough to make earth-shattering results. Third, I question the extent to which spirit knows all the answers. They might have solutions, but those might require tools or expertise I/we don't yet possess. I feel like I maintain a connection, both in ITC and mentally, with Tesla. He seems to be the most avid researcher in this area There. Time will tell if it's an elaborate illusion or reality.
