Michael Lee

Posts posted by Michael Lee

  1. Jeff: Was this a direct transcription of the reversed audio or from a microphone picking up the ambient sound + speaker playing the reverse audio?

     In any case, I'm OK with software as sensitive as this type of transcription, because you can use other metrics, like word counts, to verify whether the text is random or coming from a spiritual source.

     

  2. Having a speech-to-text tool like this is very useful. I'm not quite ready to pay for the monthly subscription, but I appreciate your testing. Regarding the audio, it sounds like a feedback echo may be occurring, where your microphone is picking up the speaker's output. Anytime I use a microphone, I wear headphones. Even then, a sensitive enough microphone can hear headphones 🙂

     

  3. Hi Robert!

    The quantum selection effect could be a mechanism for how spirit influences things (electronics and our physical bodies). If so, one would want to prepare as many metastable states as possible for the spirits to influence. An analogy is a spirit knocking over bowling pins: one could count the number of pins falling over per unit time, or measure the direction they fell.

    One of the biggest problems I've noticed, having explored not just audio but also images and digital ITC, is how weak the spirit signal actually is. This means we need a somewhat paradoxical setup: quietness from the physical environment, yet metastability/ease of knocking over. Typically, sensitive sensors pick up everything around them.

     

  4. On 11/1/2021 at 11:51 AM, Fernando Luis Cacciola Carballal said:

    Hi Michael,

    This program is awesome!

    And the presentation Friday night was great.

    I've found a sort of small unwanted feature (we programmers don't say the word bug): if you change the device after it started, it won't do anything. You have to stop it first, then re-start it.

    Other than that it works like a charm. Haven't see anything yet, but, I'll keep fishing.


    The new version, v0.81, automatically pauses the recording if a new camera is selected while recording.

  5. On 10/31/2021 at 10:07 AM, Paranormal Reason said:

    Hi Michael, I downloaded V 0.74 yesterday, but I've just unzipped the files and gone to install it when I got hit with the usual warning message - it's telling me that it contains the HEUR/APC Malware threat.

    Has anyone experienced this before and is it a known false flag, or is it something else?

    I'm running Windows 10 and, am using Avira Security. Not sure if any additional information will be of help to you?

    I've popped the file in quarantine for the moment, any help or guidance you could provide would be gratefully received.

    Kind Regards
    Mark D.  

    Some malware detectors don't like Python executables and all of their bundled libraries. From what I've Googled, HEUR = heuristic: the scanner is guessing that one of the libraries might be evil. But people say it shouldn't be a problem.

    Sincerely,

    the Malware on Michael's computer. 👹

  6. 2 hours ago, Andres Ramos said:

    Maybe this exactly is what Bigelow wanted. Still I'm not quite clear about the intention of this contest or why the world is a better place now after Bigelow did it.

    Here's hoping the next $1 million is devoted to a call for research proposals in the paranormal / ITC field. Maybe I'll finally be able to purchase a zero Kelvin quantum camera 🤩.

  7. Welcome!

    In my opinion, ITC, for now, walks a fine line between real and imagined because the signals that we are trying to discern are often very weak. 

    Feel free to share with us any of your results and ideas on the forum.


  8. Having broken free of audio ITC (for now), I've tried some visual ITC and developed Spiricam https://www.varanormal.com/forums/topic/1121-spiricam-combining-camera-input-and-a-touch-of-image-processing/?do=getNewComment.

    But if you really want to start quantifying whether spirit is having an effect on ITC, digital ITC is both the most rigorous and most difficult to figure out.

    I've tried a few strategies over the years, but I've converged on the idea of expecting a certain rate of symbols, such as 1 symbol per second, and using some reasonably random (but physical) noise source - for example, the line-in input of a laptop.

    In a previous post, I mentioned a parity test: we receive 0s and 1s, but the number of 1s should be even in each 8-bit byte. This acts as a "parity" or error-check mechanism. Of course, a more complicated scheme could be used, but we'll start with this.

    Now, as far as encoding characters, I'm opting at first to try ASCII, which is the standard 7-bit set (0-127) used in the US; it covers most of the common symbols of English, the 10 digits, and some punctuation and math symbols.
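As a toy illustration of the parity-check-plus-ASCII idea just described (the helper name decode_byte is mine, for this sketch only; bits are taken least-significant first, matching the bit weighting in the code further down):

```python
def decode_byte(bits):
    """Keep a character only if the count of 1s is even (even parity).
    bits[0] is the least-significant bit; the 8th bit serves as parity."""
    if sum(bits) % 2 != 0:
        return None  # odd parity: reject the byte
    value = sum(b << i for i, b in enumerate(bits[:7]))  # 7-bit ASCII value
    return chr(value) if 32 <= value < 127 else '.'  # '.' for non-printables

# 'A' = 65 = 0b1000001, which already has an even number of 1s
print(decode_byte([1, 0, 0, 0, 0, 0, 1, 0]))  # -> A
```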

    How do we convert physical noise into bits? There are many strategies out there. I want one that achieves some amount of randomness without forcing it and losing information that might be coming through.

    If you listen to the line-in audio with nothing connected, it predictably sounds like noise + ground hum. So I've decided to apply spectral subtraction, which should remove any periodic tonal interference along with a layer of the physical noise.

    Then, since I'm collecting one bit per 125 ms (8 bits per second), I sum the processed audio samples for each bit; if the sum is above zero, I call the bit a "1", otherwise a "0".

    Here's a snippet of results, so far. I just started running it an hour ago:

    ,$,b,..,Z,,,,.,,,,,H 0.511
    ,,,.,Du2,,,Zm,Mw&&,, 0.513
    ==,,J.,,>,,1|,U(gT,, 0.515
    #.,,],\,bf..,444,,., 0.518
    ,`C+,,.,2V,.^,S,,,,, 0.515
    ,,,,#,j,,,oV.D,,,,,M 0.511
    .VT, N)lZn,1, ,,,,,, 0.512
    ,.,Q,.,JP-.y.n, ,,., 0.516
    ,9,,,eR,..,,,V,.,,,, 0.510
    ,.,,.-,u@|u,;{.O;,j, 0.515
    ,g ,m,k,15v,,Og,|,Iu 0.517
    #,.{,j,.,Y,,.,{!,{m. 0.520
    .],,,=.N,,`,fB,,,.,, 0.518
    8,,),.,.,,.,,.,,,;.- 0.516
    i.8.,,/..oCT.,,,G,,, 0.518
    ,=/,.,,to,,,,>.,Z(,Y 0.518

    The right-hand column gives a running tally of the fraction of bytes that have even parity.

    The symbol "." is used to represent the non-displayable characters (codes 0-31 and 127).

    The symbol "," is used to represent bytes that have odd parity, which we therefore exclude.

    And here's the Python code:

    # -*- coding: utf-8 -*-
    """
    @author: Michael S. Lee (10/30/2021)
    """
    
    import numpy as np
    import pyaudio
    import time
    import keyboard
    from scipy.signal import stft, istft
    
    # Convert byte value to character
    def convert(byte):
      if ((byte < 32) or (byte == 127)):
          char = '.' # invisible character
      else:
          char = chr(byte)
      return(char)
      
    def callback(data, frame_count, time_info, status):
      
      global index, bias, line, parity
    
      # Extract frame of audio from pyaudio buffer     
      frame0 = np.copy(np.frombuffer(data,dtype='float32'))
      
      # Normalize audio
      frame = 0.05 * (frame0-frame0.mean()) / frame0.std()
      
      # Simple spectral subtraction to remove interference and noise
      f,t,F = stft(frame, nperseg = 256)
      G = np.abs(F)
      G1 = np.copy(G)
      P = F / (G + eps)
      mean = G.mean(axis=-1)
      for ii in range(G1.shape[1]):
          G1[:,ii] = np.maximum(G1[:,ii] - mean*spec,0.0)
      t, frame1 = istft(P*G1)
      
      # Normalize audio
      frame1 = 0.05 * (frame1-frame1.mean()) / frame1.std()
      
      bit = np.zeros((8),dtype='uint8')
      for i in range(nbits):
          block =  frame1[i*nblock:(i+1)*nblock]
          bit[i] = (block.sum() > 0) * 1
         
      # Save character only if parity is even
      if (np.mod(bit.sum(),2)==0):
        byte = bit[0] + bit[1] * 2 + bit[2] * 4 + \
               bit[3] * 8 + bit[4] * 16 + bit[5] * 32 + \
               bit[6] * 64
        char = convert(byte)
        parity = parity + 1
      else:
        char = ',' # parity was odd
    
      # Debug output (uncomment to monitor the running bias and frame spread)
      # bias = lambda1 * bias + (1.0 - lambda1) * (frame1.sum() / 8.0)
      # std = frame1.std()
      # print (bit, bit.sum(), "%5.2E" % bias, "%5.2E" % std)
     
      # Print and add next character
      print (char, end="") 
      line = line + char
      index = index + 1
      
      # Save line to text file
      if (np.mod(index, nchar) == 0):
          frac = '%4.3f' % (parity / index)
          save_file.write(line+" "+frac+"\n")
          print (" "+frac)
          line = ""
      
      # Play sound of spectrally subtracted audio
      # (multiplied by zero here to stay silent)
      data2 = (frame1 * 0.0).tobytes()
      return (data2, pyaudio.paContinue)
    
    # Sampling rate
    fs = 16000
    
    # Audio block size, could be a second, 
    # could be shorter/longer.
    nframe = fs
    print (nframe,'samples per frame')
    print ("------------------------")
    
    # Index of all frames
    index = 0
    
    # #bits per frame
    nbits = 8
    
    # Bias - starting value
    bias = 0.0
    
    # lambda1 - update rate of bias
    lambda1 = 0.9
    
    # strength of spectral subtraction
    spec = 1.75
    
    # block size - needs to be integer
    nblock = nframe // nbits
    
    # #chars per line
    nchar = 20
    
    # small number
    eps = 1e-8
    
    # Accumulating line of characters
    line = ""
    
    # Keep track of parity fraction
    parity = 0
    
    # Audio device index
    microphone = 3
    output = -1
    
    # Give file a unique name
    recordfilename = time.strftime("%Y%m%d-%H%M%S")+'.txt'
    
    # Text file to store stream
    save_file = open(recordfilename,'w')
    
    # Start audio stream
    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paFloat32,
                    channels=1,  # mono
                    rate=fs,
                    output=True, # We're just generating text
                    input=True,  # Audio input source
                    input_device_index=microphone,
                    output_device_index=output,
                    frames_per_buffer=nframe, # 1-second
                    stream_callback=callback)
    
    # Wait for key press
    while True:
        if keyboard.read_key() == "q":
            print("\nUser break.")
            break
        time.sleep(1)
    
    # Stop the stream, shut down PyAudio, and close the text file
    stream.stop_stream()
    stream.close()
    p.terminate()
    save_file.close()


  9. The program has a few machine-learning models to reverse things like additive noise and quantization artifacts. It also has a primitive "speech model" that adds the formants it finds to a "buzzing" glottal pulse. The machine-learning models were trained on a 30-minute speech by Gen. David Petraeus to Congress, so it's funny when German is trying to be spoken through it.

    We'll try to get you "hooked up" with the program. It requires a good CPU and a few steps to install, including Anaconda or Miniconda (a Python environment), and may take about 5-10 GB of disk space - not because my program is particularly large; it's just that Python and end-users don't mix 😞

  10. For random number sequences, frequency refers to the relative differences between neighboring numbers. The sequence 3, 95, 26, 62 bounces around (high frequency). The sequence 25, 32, 28, 27 is less jumpy (low frequency).

    Now imagine pixel neighborhoods. Blurring makes all the pixels in a neighborhood almost the same. But to get finer features, you need a balance: a little bit of blurring at each length scale - neighborhood, village, city, county, state, and so on.
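A quick way to make the "jumpiness" above concrete is to average the absolute differences between neighboring numbers (the helper name jumpiness is mine, just for illustration):

```python
import numpy as np

def jumpiness(seq):
    """Average absolute difference between neighboring numbers -
    a crude stand-in for the 'frequency' of a sequence."""
    return np.abs(np.diff(seq)).mean()

print(jumpiness([3, 95, 26, 62]))   # -> roughly 65.7 (high frequency)
print(jumpiness([25, 32, 28, 27]))  # -> 4.0 (low frequency)
```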

  11. 5 hours ago, Karyn said:

    Interesting pics Michael.  I would love to see the two bottom ones upside down.

    The wants plan took a hit this week. I would buy myself a couple of misting machines and convert the camera to a tripod or some other stable stand. The plan remains a plan now, seriously taking a hi.  It stays put back another couple of weeks/month.

    Inka ~ pussy cat, which I have now held more than I had the opportunity to keep my Em, had a super expensive operation day before yesterday. So once again, my playthings slide down the wants pile. The Needs pile has grown.

    Karyn, the beauty of this new technique is that it doesn't require mist, smoke, or mixing cleaning solutions 😲 (chemistry joke). We all have a webcam and a computer. I also envision a cellphone app. The software converts the digital noise and the light particles in a room into "smoke." It is as if we can now see the invisible air itself.

  12. 5 hours ago, Andres Ramos said:

    Interesting. So the idea is to provide the outline of a face as a base for, lets say information precipitation, that helps the spirits to do their manipulations at the right points but in the end forming another face that is not yours?

    In that case wouldn't it be favorable to use a somehow "neutral" face like a smiley?

    Spiricam is a general approach. It works just fine with a blank wall or in a closed box. Pointing at someone is a variation that might reveal something about their aura. To be on the safe side, one could put a sheet over their face. 👻

  13. Building on my work with Perloids derived from the noise of software-defined radios, I decided to explore the noise coming from cameras. In principle, cameras provide high-bandwidth information, and spirits could potentially interact either directly with individual pixel-level sensor elements or with the light in the environment.

    The two "tricks", then, for getting things to work are to derive as much noise as possible from the camera and then to apply our favorite transformation that either blurs or "smokifies" the noise.

    How do you get lots of noise from a camera? Well, one way would be to turn up the gain and exposure time and put the camera in complete darkness. Another, more universal method is to simply subtract the current frame from the previous frame. This allows us to point the camera at virtually anything (or nothing at all) and get a noisy pattern.
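A minimal sketch of this frame-differencing idea, using synthetic NumPy arrays as stand-ins for two consecutive webcam frames (with a real camera, something like OpenCV's VideoCapture would supply them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two consecutive frames: the same static
# scene plus independent per-frame sensor noise
scene = rng.integers(0, 200, size=(120, 160)).astype(np.float32)
frame_prev = scene + rng.normal(0.0, 2.0, scene.shape)
frame_curr = scene + rng.normal(0.0, 2.0, scene.shape)

# Subtracting consecutive frames cancels the scene, leaving mostly noise
noise = frame_curr - frame_prev

# Amplify and re-center so the faint noise becomes a visible 8-bit image
amplified = np.clip(noise * 20.0 + 128.0, 0, 255).astype(np.uint8)
```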

    Here's an example from a webcam. I've mathematically amplified the noise so you can see it. 

    image.thumb.png.695c53272ebc16d0257c12f7407111f6.png

     

    Now the next trick is to somehow make heads or tails of the noise. Blurring is one possibility, which could be done with something called Gaussian kernel convolution in image manipulation programs like GIMP and Photoshop. However, even better than that is "smokification." That's a word I just made up, but it's like how Perlin noise is created from regular white noise. White noise is fairly featureless and looks like indistinguishable sand, but Perlin noise often produces cloudy / smoky-like textures like the ones you see in Keith's Perlin image experiments.
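The Gaussian kernel convolution mentioned above can be sketched in a few lines with SciPy (the sigma value here is an arbitrary choice):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
white = rng.normal(size=(128, 128))  # featureless "sand"

# Gaussian kernel convolution: each pixel becomes a weighted average
# of its neighborhood, washing out the finest detail
blurred = gaussian_filter(white, sigma=3.0)

print(white.std(), blurred.std())  # blurring shrinks pixel-to-pixel variation
```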

    Now, I've been trying out different variations on Perlin, like my so-called "Perloids," but the range of useful ones seems to lie between "inverse frequency" and "inverse squared frequency." What this means in layman's terms is that we transform the noise to look as real as possible using essentially a two-dimensional "graphic equalizer" (you know, the audio kind for controlling high, mid, and low frequencies?).

    Now imagine amplifying the "bass frequencies" (the largest features of an image), keeping the mid-levels in the middle, and damping the "treble/high frequencies" (the smallest features). This is the essence of Perlinizing/smokifying noise - and the results are amazing, as we've seen with Perlin noise images, and continue to be with my webcam-noise input.
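The two-dimensional "graphic equalizer" idea can be sketched with a 2-D FFT: weight each spatial frequency's amplitude by 1/f^alpha (alpha and the grid size here are arbitrary choices, not values from the Perlin program):

```python
import numpy as np

rng = np.random.default_rng(2)
white = rng.normal(size=(256, 256))

# Radial spatial-frequency grid for the 2-D spectrum
fy = np.fft.fftfreq(256)[:, None]
fx = np.fft.fftfreq(256)[None, :]
f = np.sqrt(fx**2 + fy**2)
f[0, 0] = 1.0  # avoid dividing by zero at the DC term

# Boost the "bass" and damp the "treble": weight amplitudes by 1/f^alpha
alpha = 1.0  # 1 -> "inverse frequency", 2 -> "inverse squared frequency"
smoky = np.real(np.fft.ifft2(np.fft.fft2(white) / f**alpha))
```

Neighboring pixels in `smoky` vary far less than in `white`, which is what gives the cloudy, smoke-like texture.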

    Incidentally, the original Varanormal France Perlin program uses random numbers generated with a deterministic algorithm, although with a non-deterministic seed. How spirits figured out how to send messages through that channel puzzles me to this day, but it seems to work. Here I'm using physically generated noise (from a webcam), so at least it seems plausible that spirits could influence the electronics, or the light in the air, to send us information.

    And now for some pictures. Caveat emptor: I can't prove that anything we see here is really from spirit, and not just active imagination. But I'm hoping over time, the proof will either get clearer, or we'll have too many instances to dismiss.

    The first two are from a camera I got in the mail today, and they're a bit disappointing. But what you might see in black and white are four faces followed by one face. The next four small ones are faces. The final one on the 2nd row appears to have at least four faces. 

    example1.png.thumb.jpg.78c0b512f16ee115904a9111aa94363f.jpg

    example2.thumb.jpg.a84b10f54bf1761ddebfb5b0adb4463e.jpg

    The last row is a little different. The first one is a whole body image. The second looks like a humanoid-alien face. The final one is an example of when I point the camera at myself and hold real still. Maybe we'll start being able to view the personalities of our soul?

    One very last comment for this post: there are no machine-learning tricks in the method (yet?!?), just "simple" math.

