Blog Comments posted by Andres Ramos

  1. Wow, this is a very good article, Fernando. It made me think a lot. What I find so appealing about the Bayesian knowledge system is that it is always open to new insights. This raises the question of why science is so focused on gaining bulletproof certainty before adopting knowledge. In my eyes this attitude is mainly used to draw red lines. Everything before the red line is officially accepted knowledge and basically no longer questionable; people questioning this truth are simply ignored. Knowledge behind the red line is vaguer knowledge; it is seen as valuable to continue research in these areas. In my eyes it is questionable whether this process is fruitful in the quest for truth, or whether it is rather on the fringe of a belief system itself.

    The second thing I feel very comfortable with in the Bayesian system is that evaluating pieces of truth by their probability of being true is closer to practical life than the absolute acceptance or rejection of knowledge. In daily life we are constantly confronted with problems and decisions where we don't know exactly how things will turn out in the end. Dealing with probabilities is a very common behavior that we apply unconsciously, for example when we do risk assessments. A very interesting outcome of this consideration is that the original question "What is truth?" is replaced by the more practical one, "If this were true, what would that mean for me?". We would absorb the previously rather objective question into our personality. Maybe then we would decide to follow a spiritual path just by considering that if it's true we would gain so much, and if not we would lose only a little.
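    As a small illustration of that updating idea, here is a toy sketch in python (the prior and likelihood numbers are made up by me, not taken from Fernando's article): every new observation only shifts the probability we assign to a claim, it never turns it into a final yes or no.

      # Toy Bayesian update: belief in a claim is a probability that shifts with evidence.
      # The prior (0.5) and the two likelihoods below are illustration values only.

      def update(prior, p_evidence_if_true, p_evidence_if_false):
          """Posterior probability of the claim after seeing one piece of evidence."""
          numerator = p_evidence_if_true * prior
          return numerator / (numerator + p_evidence_if_false * (1 - prior))

      belief = 0.5                      # start undecided
      for _ in range(3):                # three observations that fit the claim better than not
          belief = update(belief, p_evidence_if_true=0.7, p_evidence_if_false=0.4)
          print(round(belief, 3))       # 0.636, 0.754, 0.843 -- growing, but never reaching 1.0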

  2. Option 3: a combination of options 1 and 2.

    Do you remember the throttle valves in old combustion engines with a carburetor? It was a disk the size of the inner diameter of the air-intake tube. The disk was fixed to a shaft going through the tube walls. By turning the shaft by an angle between 0 and 90° you could change the tube opening from fully closed to fully open. In our case this would sweep the resonance frequency as well. The shaft could be driven directly by a motor, and the motor modulated by a power transistor. In my eyes this is the most promising approach.

     

  3. 6 hours ago, Keith J. Clark said:

    Andres, dare we say you invented another hand instrument? It might be monotone but there's potential 🙂

    Perhaps a way to modulate without a hand would be to set up a fan blowing towards the tube? How could we randomize something like this?

    Hm yes, you're playing with the idea of a mechanical cavity resonator that can be tuned quickly by a randomized signal source. A tricky challenge that requires a bit of mechanical work. Basically I see the following options.

    1. Changing the resonator's volume

    Let's say we take a piece of cardboard tube, like from a toilet paper roll. At one end we put a loudspeaker or some other kind of mechanical vibrator, sealed by a diaphragm. In the cardboard tube wall we cut a half-inch hole to let out the sound. The second end of the tube is closed by a moving piston driven by a motor or similar. The piston should be made from styrofoam to minimize its mass so it can be moved quickly. For first tests we could use the motor with something like a crankshaft to move the piston back and forth. The result would be a quick sine sweep of the resonance frequency. If this works we must make up our minds how to control the piston freely.

    2. Making a flute-like resonator

    Again we could take a cardboard tube with a diaphragm and vibrator/loudspeaker at one end. The second end is closed by a lid with a small hole in it to let out the sound. We also cut a hole in the wall, but this time it is covered by a piece of rubber attached to a motor shaft or solenoid, which can be used to more or less uncover the hole. This will change the pitch of the resonance frequency, like with a flute.

    For the vibrator I would recommend a standard buzzer that generates a signal close to glottal impulses. A rough estimate of the resonance frequencies we could expect from option 1 is sketched below.
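    To get a feeling for option 1, here is a back-of-the-envelope python sketch treating the tube with its side hole as a Helmholtz resonator, so the resonance frequency rises as the piston shrinks the enclosed volume. All dimensions (tube radius, wall thickness, piston positions) are my own assumptions for a toilet-paper-roll-sized tube, not measurements.

      import numpy as np

      # Rough Helmholtz estimate: f = c/(2*pi) * sqrt(A / (V * L_eff))
      C_AIR  = 343.0                    # speed of sound in air, m/s
      R_HOLE = 0.0127 / 2               # half-inch hole -> radius in m
      A_HOLE = np.pi * R_HOLE**2        # area of the hole (the "neck")
      L_NECK = 0.001 + 1.7 * R_HOLE     # cardboard wall thickness plus end correction
      R_TUBE = 0.022                    # assumed tube radius (~44 mm diameter)

      def resonance_hz(cavity_length_m):
          """Resonance frequency for the volume the piston leaves enclosed."""
          V = np.pi * R_TUBE**2 * cavity_length_m
          return C_AIR / (2 * np.pi) * np.sqrt(A_HOLE / (V * L_NECK))

      for length_cm in (2, 4, 6, 8, 10):   # sweep the piston position
          print(f"{length_cm:2d} cm cavity -> {resonance_hz(length_cm / 100):4.0f} Hz")

    With these numbers the piston would sweep the resonance roughly between 1 kHz and 450 Hz, which is right in the voice band.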

     

  4. 3 hours ago, Alain said:

    Hello Andres

    Personally, I don't hear the same thing (I suppose what I hear is just auditory pareidolia) 😉

    Fast blinken / me: "Sortez vos propres boîtiers" (Get out your own enclosures)

    My woman / "On voudrait du ...." (We would like some ... ; I don't understand the end)

    One hole / "Porter un coup de crosse" (Strike a blow with a gun butt)

    Too much is harmful / "Bonne chance à côté" (Good luck next door)

    Too much / "Macho ou Macha" (Macho or Macha)

    Interesting, Alain. I have the same problem the other way around and can't hear what you heard. There is a theory we sometimes discuss in the research team that our perception is the last stage where the information becomes finalized. Only after our perception is the information transfer completed. This means that the same recording can manifest as different messages in the ears of different people. Sometimes I think this is true because the perceptions are so different.

  5. 2 hours ago, Andres Ramos said:

    Today I reinstalled python. I trashed it while trying to start your app by creating a direct link from my desktop. I also reinstalled the tf25_nongpu environment. It worked smoothly, so in my experience the installation process is stable.

    I'm currently testing your app with phototransistor noise sources. One setup I'm working on seems to be fairly promising. In my tests I found out how important the low threshold and the tone threshold parameters are. I was really able to tune your software by carefully adjusting them. By the way, the automatic storing and retrieving of parameter settings is really nice!

    Wow, that sounds like a very sophisticated improvement. I'm curious! In the meantime I will try to squeeze out what is possible from your code, after I finish the latest phototransistor design and before I continue with manufacturing Sonia's Lightbridge device.

    If I find an easy-to-use solution for a one-click start I will let you know.

  6. Today I reinstalled python. I trashed it while trying to start your app by creating a direct link from my desktop. I also reinstalled the tf25_nongpu environment. It worked smoothly, so in my experience the installation process is stable.

    I'm currently testing your app with phototransistor noise sources. One setup I'm working on seems to be fairly promising. In my tests I found out how important the low threshold and the tone threshold parameters are. I was really able to tune your software by carefully adjusting them. By the way, the automatic storing and retrieving of parameter settings is really nice!

    Moreover I am curious whether I could create a link that starts your app inside the Anaconda prompt by using Windows PowerShell.
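    For the one-click idea, a batch file might be simpler than PowerShell. This is only a sketch under assumptions: a default Anaconda install in the user profile, the tf25_nongpu environment, and the folder and script name (itc_translator.py) taken from the error report further down this page; the paths would have to be adapted to your setup.

      rem start_itc_translator.bat -- double-click launcher sketch, adjust paths as needed
      call %USERPROFILE%\anaconda3\Scripts\activate.bat tf25_nongpu
      cd /d "%USERPROFILE%\Documents\Code\Python\Michaels ML"
      python itc_translator.py
      pause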

     

  7. 3 hours ago, Michael Lee said:

    You're right! Until we reach definitive messages, our perception of the messages is part of the chain. It's digitally-assisted mediumship.

    Yes. Of course this sheds a completely new light on the objectivity of messages and how to separate them from pareidolia. Maybe the red line does not run between reality and pareidolia; maybe it just separates useful pareidolia from useless? 🥴

  8. Exactly! Apart from some very good direct microphone voices, we always need software to process the raw signals to even get a chance to distill the original information from them.

    However, sometimes I think that the software processing is not just about extracting hidden information from the recordings, but that the software is itself part of the PK effect and thus one factor in the signal transformation chain that starts in the hereafter and ends in our world. Moreover I frequently come to the point that even our perception is part of the transformation chain and that the 'objective' message is only finally shaped during our perception. But I'm not really sure about this...

  9. I like this clean conceptual approach you are presenting here, Michael. Very well done!

    I'd like to mention two other possible ITC components. The first one is fragility. At first sight, I know, fragility seems similar to sensitivity; however, I think fragility offers something that sensitivity does not. A sensitive detector detects the desired signal but usually adds noise in the amplification process. In a fragile system, an energy quantum coming from the hereafter can be multiplied by avalanche effects when the detector system flips from one state into another. In this case the signal can sometimes stand out from the noise floor. I observed this behavior very much with coherers, and also in your experiments with the whistler, where you used an operational amplifier in a weak state of feedback, if I remember correctly.

    Another thing is utilizing the chaotic behavior of systems. We see this in white or pink noise, all kinds of uncontrolled feedback, the complex impulse patterns in the VISPRE, and many more. I suppose the usability of those effects lies in the fact that everything is already "moving". Literally speaking, it certainly takes more energy to set something in motion than to affect something that is already in motion.

    I would also like to add something to your description of driving energy. Generally speaking, this energy should already appear in the shape you want the resulting signal to have. We observe this very much with microphone voices. I often got messages that had the shape or characteristics of the background sound I was providing. When I was typing on my keyboard in the background I got voices with clicking sounds, while pink noise makes spirit voices sound croaky.

  10. Today I found a way to run your program. First I tried to start python from a Windows console with your program as a parameter, but Windows didn't know the path to python. Then I started Anaconda, entered the tf25_nongpu environment, opened a console in Anaconda and started python with your program. Et voilà, it works!!!!
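    Written out as commands in the Anaconda Prompt, the sequence that worked for me looks roughly like this (environment and script names as used in this thread; the folder is the one from my error report and may differ on other machines):

      conda activate tf25_nongpu
      cd "C:\Users\User\Documents\Code\Python\Michaels ML"
      python itc_translator.py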

  11. 12 hours ago, Michael Lee said:

    Spyder is a headache with this new environment. It had to be installed via pip, not conda, so there's no simple shortcut in Windows. Each time I load up Spyder, I have to activate the environment in the Anaconda shell, first, then type spyder. 🥴

    Ok, so it's better to start your app directly from a command console with python? When I do this in the respective folder where the application resides, am I in the correct environment automatically?

     

  12. 37 minutes ago, Michael Lee said:

    The environment.yml file is used to build the environment for the first time (and then that installation is done).

    But then that environment needs to be "activated" (or selected) to use: conda activate tf25_nogpu

    Yes, and so I did. However, I ran the application in Spyder, and I cannot rule out that I started Spyder in another environment. Will check this tomorrow.
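    For reference, the two steps as I understand Michael's description, written as commands (the environment name is the one from his post; environment.yml is the file shipped with his code):

      rem one-time build of the environment from environment.yml
      conda env create -f environment.yml

      rem must be repeated in every new shell before starting the app
      conda activate tf25_nogpu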

  13. Hi Michael! I installed everything according to your very good description. Now I got the following error while starting "itc_translator.py":

     

    runfile('C:/Users/User/Documents/Code/Python/Michaels ML/itc_translator.py', wdir='C:/Users/User/Documents/Code/Python/Michaels ML')
    Traceback (most recent call last):

      File "C:\Users\User\AppData\Local\Temp/ipykernel_15308/3073042103.py", line 1, in <module>
        runfile('C:/Users/User/Documents/Code/Python/Michaels ML/itc_translator.py', wdir='C:/Users/User/Documents/Code/Python/Michaels ML')

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\debugpy\_vendored\pydevd\_pydev_bundle\pydev_umd.py", line 167, in runfile
        execfile(filename, namespace)

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\debugpy\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
        exec(compile(contents + "\n", file, 'exec'), glob, loc)

      File "C:/Users/User/Documents/Code/Python/Michaels ML/itc_translator.py", line 355, in <module>
        custom_objects={'alpha':alpha})

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\tensorflow_core\python\keras\saving\save.py", line 146, in load_model
        return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\tensorflow_core\python\keras\saving\hdf5_format.py", line 166, in load_model_from_hdf5
        model_config = json.loads(model_config.decode('utf-8'))

    AttributeError: 'str' object has no attribute 'decode'

    Error in callback <bound method AutoreloadMagics.post_execute_hook of <autoreload.AutoreloadMagics object at 0x000002255E30D448>> (for post_execute):
    Traceback (most recent call last):

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\IPython\extensions\autoreload.py", line 538, in post_execute_hook
        _, pymtime = self._reloader.filename_and_mtime(sys.modules[modname])

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\IPython\extensions\autoreload.py", line 184, in filename_and_mtime
        if not hasattr(module, '__file__') or module.__file__ is None:

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
        module = self._load()

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\site-packages\tensorflow\__init__.py", line 44, in _load
        module = _importlib.import_module(self.__name__)

      File "C:\Users\User\anaconda3\envs\tf21_nogpu\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)

      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import

      File "<frozen importlib._bootstrap>", line 983, in _find_and_load

      File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked

    ModuleNotFoundError: No module named 'tensorflow_core.estimator'

     

  14. 2 hours ago, Michael Lee said:

    Kevin - Miniconda, which is a smaller version of Anaconda, should be sufficient. 

    https://docs.conda.io/en/latest/miniconda.html#windows-installers

    Miniconda installs the basic Python library management tools, and then my instructions will lead to a download of the Python libraries that my program needs. I tried the process on my daughter's Windows computer and the whole thing was fairly painless. The only problem was that when I wanted to remove Miniconda (to clear her computer) I accidentally uninstalled Minecraft instead. Needless to say, I'm now in big trouble! 😛

    Yes you are, but you made the world a little better now! 🤣

  15. 38 minutes ago, Michael Lee said:

    Andres and Kevin: You are two of the "alpha-testers."

    Download the code from the link, when you get a chance, and have a fire extinguisher handy! 😉 

    I have my ectoplasm thrower ready. 🤪

    Since I already have Anaconda running I can jump directly into testing. I bet I will have some trouble with the python libs, as usual.

    Will check it out tomorrow.

  16. 22 hours ago, Michael Lee said:

    Two more quotes for today, using a different FPGA design, but same ML software.

     

    "the transmitter"

     

    "they clearly spin...circles"

     

     

     

    I agree with Kevin. The results of your ML are very impressive in combination with your FPGA! Very well done Michael.

    From what I remember, the python application you published and I installed does not contain the ML part. Could you publish the current version with ML?

  17. These are impressive results, Michael. If I got you right, each RO is made of 101 gates and you are combining 16 of these ROs to control one tone by XORing them?

    The voices are very clear. Are you using the same ML software you published in the community, or is it trained solely on the signal of your musical tones?
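    To check my understanding of that construction, here is a small python sketch that simulates it in software: 16 free-running ring oscillators are sampled and XORed into one bit stream. The gate delay, jitter and sample rate are my own assumptions for illustration, not values from Michael's FPGA design.

      import numpy as np

      rng = np.random.default_rng()

      N_RO    = 16        # ring oscillators combined per tone (from Michael's description)
      N_GATES = 101       # inverters per ring, odd so the ring oscillates
      T_GATE  = 1e-9      # assumed mean gate delay: 1 ns
      JITTER  = 0.02      # assumed relative jitter per half period
      FS      = 48_000    # assumed sampling clock in Hz

      def ro_bits(n_samples):
          """Sample one free-running, jittery ring oscillator at FS; returns 0/1 values."""
          half_period = N_GATES * T_GATE
          t_sample = np.arange(n_samples) / FS
          n_edges = int(t_sample[-1] / half_period * 1.05) + 8
          edges = np.cumsum(rng.normal(half_period, JITTER * half_period, size=n_edges))
          # the output toggles at every edge, so its state is the edge count modulo 2
          return (np.searchsorted(edges, t_sample) % 2).astype(np.uint8)

      bits = ro_bits(1000)
      for _ in range(N_RO - 1):         # XOR all 16 oscillators sample by sample
          bits ^= ro_bits(1000)

      print(bits[:32], "mean:", bits.mean())   # the mean should hover around 0.5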
