Blog Comments posted by Michael Lee
Mike,
Thank you for your interest in trying out the software!
This weekend, I'll first try again to turn this software into an "executable" like I did with Spiricam, which makes it super easy for anyone to download and use. If that doesn't work, I'll start working on explaining the install process in more detail with some pictures or maybe a video.
-michael
3 -
I'm happy to read that the code is working for you.
I'm busy working on an improved detoning model. The script doesn't change much, if at all, but the goal is to analyze a half-second of signal and figure out the closest true speech (tm) analogue. The hard part is getting the output to sound "clean" - even if it is an imperfect guess.
A clickable link to start up the script would be nice. I never thought of it for this code because I'm always tweaking it in the Spyder editor.
1 -
I notice a range of quality within a single minute of recording: maybe a few clear words, a few possible phrases, and a lot of unclear utterances. This is the life of an ITC researcher - the threshold of reality and delusion.
1 -
Manuals? Sounds like another job we'll need to outsource.
Ok, if the program is running, that's good. I'm a little concerned that you are not hearing yourself, though; that tells me the input device might not be the right one yet. If you move all the sliders to the left and uncheck "Machine Learning," the output should be close to the raw input (delayed by 2.5 seconds).
Once you do get the microphone picking up, the best results seem to occur when you have some sort of noise or tones in the background. Complete silence will lead to far fewer "blips" of voice.
Stepping back to the general problem of executable paths (conda and the like): I don't usually have a problem when I run within an Anaconda prompt/shell. The shell, I'm guessing, sets all of the path variables for "conda," "python," "spyder," etc.
Regarding saving, the way I have it set up, it stores all of the output from the beginning and saves it to a file when you hit "Exit." The save location can be changed by editing the main Python script near the end. I know it's a little clunky; this is the one program I haven't fully made "user-friendly."
0 -
You're right! Until we reach definitive messages, our perception of the messages is part of the chain. It's digitally-assisted mediumship.
1 -
Good points! Metastability/fragility, chaos/noise processes, and energy that resembles the desired output. Fragility is also a good term for how our experiments work one moment and then stop working the next. There are these very precise balance points that maximize the spirit signal but also fall apart very easily.
Edit: In the stream, spirits just reminded me of another key concept: inference software. We can get all the conditions right, but then we need a little help from software to convert the signal to semi-intelligible information (e.g., voices).
1 -
Ok. The conda env create needs to be run in the directory where the environment.yml resides.
You can move the .yml around to put it where you need it; it's only used once, for that command.
0 -
I don't recommend moving any files around, but that may have helped you with scalar.
Now "conda env create --file environment.yml" should do a bunch of stuff and create a new environment called "tf25_nogpu"
If you look at the "environment.yml" with a text editor it should specify somewhere "tf25_nogpu" as the "title" and then list all of the packages it will download and install when "created."
This environment will then support the "ITC translator."
If you're not getting "tf25_nogpu" to list as an available environment after doing the "create", then "hmmm."
conda env list
Another trick to know: as long as you don't modify the "base" environment, you can always remove other environments and try building them again, starting with "conda env remove -n <name of broken environment>"
Once again, moving files around shouldn't be necessary. If all else fails we can do a "Zoom" to see what's going on.
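Putting the steps above together as a command sketch (the environment name tf25_nogpu comes from the post; run these inside an Anaconda/Miniconda prompt, in the folder containing environment.yml):

```shell
# One-time: build the environment from the .yml in the current directory
conda env create --file environment.yml

# Verify that tf25_nogpu now shows up in the list of environments
conda env list

# If the build went wrong, remove the broken environment and try again
conda env remove -n tf25_nogpu
conda env create --file environment.yml
```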
-michael
0 -
Right. I use something called the Anaconda Prompt Shell to enter the command line. Python and conda are both available then. All this is good for the instructions.
1 -
The environments are all stored in a central location. "Activate" sets all the correct paths for python, spyder, etc. My program isn't affected by those path variables, but, of course, the libraries that it calls are affected.
1 -
Spyder is a headache with this new environment. It had to be installed via pip, not conda, so there's no simple shortcut in Windows. Each time I load up Spyder, I have to activate the environment in the Anaconda shell first, then type spyder.
0 -
The environment.yml file is used to build the environment for the first time (and then that installation is done).
But then that environment needs to be "activated" (or selected) to use: conda activate tf25_nogpu
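In short, as a command sketch (environment name from the post):

```shell
conda env create --file environment.yml   # one-time: build the environment
conda activate tf25_nogpu                 # every session: select it before running
```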
0 -
File "C:\Users\User\anaconda3\envs\tf21_nogpu\..." tells me you didn't first "conda activate tf25_nogpu"
There are definitely compatibility issues between different TensorFlow versions, so you have to be in an environment that's running TensorFlow 2.5 to run the models I provided.
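A quick way to confirm the right environment is active before running the models, instead of hitting a traceback. This is a small sketch, not part of the original script; the function name and the error message are assumptions:

```python
import importlib.util

def tf_version_ok(required_prefix="2.5"):
    """Return True if TensorFlow is importable and its version starts with
    `required_prefix`; False otherwise (e.g., the wrong conda env is active)."""
    if importlib.util.find_spec("tensorflow") is None:
        return False
    import tensorflow as tf
    return tf.__version__.startswith(required_prefix)

# Example: fail early with a helpful hint instead of a cryptic model-load error
if not tf_version_ok():
    print("Activate the right environment first: conda activate tf25_nogpu")
```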
0 -
Kevin - Miniconda, which is a smaller version of Anaconda, should be sufficient.
https://docs.conda.io/en/latest/miniconda.html#windows-installers
Miniconda installs the basic Python library management tools, and then my instructions will lead to a download of the Python libraries that my program needs. I tried the process on my daughter's Windows computer and the whole thing was fairly painless. The only problem was that when I went to remove Miniconda (to clear her computer), I accidentally uninstalled Minecraft instead. Needless to say, I'm now in big trouble!
1 -
Andres and Kevin: You are two of the "alpha-testers."
Download the code from the link, when you get a chance, and have a fire extinguisher handy!
0 -
I think it's all real. I mean, I wouldn't be working so hard if I didn't think it was. Proof is hard, though. I've tried a little, like having a third party ask me to get information from my stream. But not enough - I hope others can help.
There are a few challenges. One is that you want messages to be clear enough that two or more people can agree on them. Another is for the messages to be long enough to be useful; a message like "remove the wire" isn't enough to make earth-shattering results. Third, I question the extent to which spirit knows all the answers. They might have solutions, but those might require tools or expertise I/we don't yet possess.
I feel like I maintain a connection both in ITC and mentally with Tesla. He seems to be the most avid researcher in this area There. Time will tell if it's an elaborate illusion or reality.
3 -
I do need to publish an ML version! It won't be plug & play like Spiricam, but it should be usable for the adventurous.
1 -
Two more quotes for today, using a different FPGA design, but same ML software.
"the transmitter"
"they clearly spin...circles"
3 -
Correct on the specifications. I'm using the same "translator." I try to make my models as universal as possible and reuse the same ones for all of my experiments. My latest "musical tone" reverser takes the whole spectrogram as input, applies sparsification (keeping 1 in 20 up to 1 in 80 bins), and then reconstitutes the remaining "dots" as 60 ms tones. This provides a fast "tone-ification" of voice that I then train an ML model to reverse. The model can be used with either a tone-based signal (like the one in this thread) or a strong spectral subtraction of a noise signal.
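A rough sketch of the sparsification and tone steps described above. The 1-in-20 to 1-in-80 keep ratios and the 60 ms tone length come from the post; everything else here, including the function names and the thresholding strategy, is an assumption:

```python
import numpy as np

def sparsify(spec, keep_ratio=20):
    """Zero all but the top 1-in-`keep_ratio` magnitude bins of a spectrogram."""
    flat = spec.ravel()
    k = max(1, flat.size // keep_ratio)
    threshold = np.partition(flat, -k)[-k]        # k-th largest magnitude
    return np.where(spec >= threshold, spec, 0.0)

def dot_to_tone(freq_hz, sr=16000, tone_ms=60):
    """Render one surviving 'dot' as a short sine tone (60 ms by default)."""
    t = np.arange(int(sr * tone_ms / 1000)) / sr
    return np.sin(2 * np.pi * freq_hz * t)

# Example: keep 1-in-20 bins of a toy 10x10 "spectrogram"
spec = np.arange(100.0).reshape(10, 10)
sparse = sparsify(spec, keep_ratio=20)            # only 5 bins survive
```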
For fun, this is the one legible phrase from my 1-minute recording this morning:
I hear "This is divine providence."
4 -
Some of them remind me of a robot dog barking. The one at 0:06 reminds me of a '60s sci-fi TV show.
2 -
Chicken Morse Code. What's the recipe?
2 -
That's a really good point; at best I have a linear space of 6 string waveforms for them to form sounds. One of my earliest visions showed, I think, 7 or 9 lines, so maybe it's enough, especially if you sprinkle on some special ML sauce.
1 -
Try putting your hand over the tube to get a "waah" sound; then, if that works, figure out some sort of valve lid you could make, controlled by a second buzzer.
1 -
I agree, we're trying to make artificial humans for spirits to affect. Whoops, did I say that? The electrical hum may have an additive effect, since the guitar strings have a limited spectrum. I don't think there are high enough frequencies in the strings to make complete formants, so I may need some careful EQ effects in the chain to bring out the HF.
1
Before I introduce another steampunk method, let me hypothesize some ITC principles
in Michael's Portal Station
A blog by Michael Lee in Instrumental Transcommunication (ITC)
Mike: there are many variations of filters and noise sources you can use. Some people use voice or voice-like audio. The key is being able to distinguish between what's deterministic (what should have happened) and what actually happened, which might have been a message from spirit. Use your intuition, as that's part of the process, until we get such clear messages that there is no doubt.