Michael Lee

Everything posted by Michael Lee

  1. I see three faces (at least) on the right side. They are subtle but realistic looking and similar to what I find. There's also possibly 2 or 3 on the left side.
  2. Interesting just looking at the table of contents of the top winner's essay. It reminds me of Victor Zammit's work, basically, a "Best-Of" compilation of afterlife evidence.
  3. Welcome! In my opinion, ITC, for now, walks a fine line between real and imagined because the signals that we are trying to discern are often very weak. Feel free to share with us any of your results and ideas on the forum.
  4. Having broken free of audio ITC (for now), I've tried some visual ITC and developed Spiricam https://www.varanormal.com/forums/topic/1121-spiricam-combining-camera-input-and-a-touch-of-image-processing/?do=getNewComment. But if you really want to start quantifying whether spirit is having an effect on ITC, digital ITC is both the most rigorous and the most difficult to figure out. I've tried a few strategies over the years, but I've converged on the idea of expecting a certain rate of symbols, such as 1 symbol per second, and using a reasonably random (but physical) noise source - for example, the line-in input of a laptop.

In a previous post, I mentioned a parity test: we receive 0s and 1s, but the number of 1s per byte (8 bits) should be even. This is a "parity" or error-check mechanism. Of course, a more complicated scheme could be used, but we'll start with this. As for encoding characters, I'm opting at first to try ASCII, a standard 7-bit set (0-127) used in the US, which has most of the common symbols of English, the 10 digits, and some punctuation and math symbols.

How do we convert physical noise into bits? There are many strategies out there. I want one that achieves some amount of randomness without forcing it and losing information that might be coming through. If you listen to the line-in audio with nothing connected, it predictably sounds like noise + ground hum. So I've decided to apply spectral subtraction, which should remove any periodic tone interference and also a layer of physical noise. Then, if I'm collecting one bit per 125 ms (8 bits per second), I sum the processed audio samples for each bit: if the sum is above zero, I call the bit a "1"; otherwise, the bit is a "0". Here's a snippet of results, so far.
I just started running it an hour ago:

,$,b,..,Z,,,,.,,,,,H 0.511
,,,.,Du2,,,Zm,Mw&&,, 0.513
==,,J.,,>,,1|,U(gT,, 0.515
#.,,],\,bf..,444,,., 0.518
,`C+,,.,2V,.^,S,,,,, 0.515
,,,,#,j,,,oV.D,,,,,M 0.511
.VT, N)lZn,1, ,,,,,, 0.512
,.,Q,.,JP-.y.n, ,,., 0.516
,9,,,eR,..,,,V,.,,,, 0.510
,.,,.-,u@|u,;{.O;,j, 0.515
,g ,m,k,15v,,Og,|,Iu 0.517
#,.{,j,.,Y,,.,{!,{m. 0.520
.],,,=.N,,`,fB,,,.,, 0.518
8,,),.,.,,.,,.,,,;.- 0.516
i.8.,,/..oCT.,,,G,,, 0.518
,=/,.,,to,,,,>.,Z(,Y 0.518

The right-hand column gives a running tally of the fraction of bytes that have even parity. The symbol "." represents the first 32 symbols, which are not displayable. The symbol "," represents bytes that have odd parity, which we therefore exclude. And here's the Python code:

# -*- coding: utf-8 -*-
"""
@author: Michael S. Lee (10/30/2021)
"""
import numpy as np
import pyaudio
import time
import keyboard
from scipy.signal import stft, istft

# Convert byte value to character
def convert(byte):
    if ((byte < 32) or (byte == 127)):
        char = '.'  # invisible character
    else:
        char = chr(byte)
    return(char)

def callback(data, frame_count, time_info, status):
    global index, bias, line, parity
    # Extract frame of audio from pyaudio buffer
    frame0 = np.copy(np.frombuffer(data, dtype='float32'))
    # Normalize audio
    frame = 0.05 * (frame0 - frame0.mean()) / frame0.std()
    # Simple spectral subtraction to remove interference and noise
    f, t, F = stft(frame, nperseg=256)
    G = np.abs(F)
    G1 = np.copy(G)
    P = F / (G + eps)
    mean = G.mean(axis=-1)
    for ii in range(G1.shape[1]):
        G1[:, ii] = np.maximum(G1[:, ii] - mean * spec, 0.0)
    t, frame1 = istft(P * G1)
    # Normalize audio
    frame1 = 0.05 * (frame1 - frame1.mean()) / frame1.std()
    bit = np.zeros((8), dtype='uint8')
    for i in range(nbits):
        block = frame1[i*nblock:(i+1)*nblock]
        bit[i] = (block.sum() > 0) * 1
    # Save character only if parity is even
    if (np.mod(bit.sum(), 2) == 0):
        byte = bit[0] + bit[1] * 2 + bit[2] * 4 + \
               bit[3] * 8 + bit[4] * 16 + bit[5] * 32 + \
               bit[6] * 64
        char = convert(byte)
        parity = parity + 1
    else:
        char = ','  # parity was odd
    #bias = lambda1 * bias + (1.0 - lambda1) * (frame1.sum() / 8.0)
    #std = frame1.std()
    #print (bit, bit.sum(), "%5.2E" % bias, "%5.2E" % std)
    # Print and add next character
    print (char, end="")
    line = line + char
    index = index + 1
    # Save line to text file
    if (np.mod(index, nchar) == 0):
        frac = '%4.3f' % (parity / index)
        save_file.write(line + " " + frac + "\n")
        print (" " + frac)
        line = ""
    # Play sound of spectrally subtracted audio
    # (can be multiplied by zero to be quiet)
    data2 = np.ndarray.tobytes(frame1 * 0.0)
    return (data2, pyaudio.paContinue)

# Sampling rate
fs = 16000
# Audio block size; could be a second, could be shorter/longer
nframe = fs
print (nframe, 'samples per frame')
print ("------------------------")
# Index of all frames
index = 0
# Number of bits per frame
nbits = 8
# Bias - starting value
bias = 0.0
# lambda1 - update rate of bias
lambda1 = 0.9
# Strength of spectral subtraction
spec = 1.75
# Block size - needs to be an integer
nblock = nframe // nbits
# Number of characters per line
nchar = 20
# Small number to avoid division by zero
eps = 1e-8
# Accumulating line of characters
line = ""
# Keep track of parity fraction
parity = 0
# Audio device indices
microphone = 3
output = -1
# Give file a unique name
recordfilename = time.strftime("%Y%m%d-%H%M%S") + '.txt'
# Text file to store stream
save_file = open(recordfilename, 'w')
# Start audio stream
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32,
                channels=1,                # mono
                rate=fs,
                output=True,               # we're just generating text
                input=True,                # audio input source
                input_device_index=microphone,
                output_device_index=output,
                frames_per_buffer=nframe,  # 1-second
                stream_callback=callback)
# Wait for key press
while True:
    if keyboard.read_key() == "q":
        print("\nUser break.")
        break
    time.sleep(1)
# Close stream and text file
stream.close()
save_file.close()
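The parity-decoding step can also be sketched in isolation. Below is a minimal, self-contained restatement of the decode logic from the script above; the function name `bits_to_char` is my own label, not something from the original program:

```python
import numpy as np

def bits_to_char(bit):
    # Decode 8 noise-derived bits (least-significant bit first) into an
    # ASCII character, keeping it only if the byte has even parity.
    bit = np.asarray(bit, dtype=np.uint8)
    if bit.sum() % 2 != 0:
        return ','                      # odd parity: reject the byte
    byte = int(np.dot(bit[:7], 2 ** np.arange(7)))  # bits 0..6 form the code
    if byte < 32 or byte == 127:
        return '.'                      # non-displayable character
    return chr(byte)

# 'A' is ASCII 65 = 1 + 64, so bits 0 and 6 are set; parity is even.
print(bits_to_char([1, 0, 0, 0, 0, 0, 1, 0]))  # prints A
```

Flipping any single bit makes the parity odd, so the byte is rejected and shown as "," - the error-check behavior described above.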
  5. I've posted the latest version in the Downloads section.
  6. Version 0.74

    34 downloads

    Unzip the file. The main program is the EXE. There's also a README.txt to look at and a config.ini. Everything else is support libraries. In addition, check out the tutorial: Spiricam Demo.

    I have a new version coming out soon (11/8/21). Look for Spiricam 0.8. It will have:
    1) Bug fix: improved color
    2) Options in the config.ini to specify camera resolution, display resolution, and workspace resolution
    3) A new lighting contrast option (Gamma)
    4) Arrow keys to control the frame slider
    5) Auto-stop if the camera selection changes during recording
    6) A new subtle option for filtering
  7. Here's a video demo/tutorial for the Spiricam software (v.0.73):
  8. Fernando, I'm working on a new version. I should have something for you by this weekend. Fewer (newer) bugs and some more features.
  9. Kevin, thanks for stepping up to the plate. If the program actually runs, I think you will enjoy it, and so will your team.
  10. I've put a graphical interface onto my Spiricam program and managed to package it with pyinstaller, so it can run as an executable (amidst a ~1GB sea of support files). If anybody has a Windows PC with a webcam (or one they can attach), and wants to experiment with visual ITC, let me know. Here's a screenshot of the program:
  11. The program has a few machine learning models to reverse things like additive noise and quantization artifacts. It also has a primitive "speech model" that adds the formants it finds to a "buzzing" glottal pulse. The machine learning models were trained on a 30-minute speech of Gen. David Petraeus speaking to Congress, so it's funny when German is trying to be spoken through it. We'll try to get you "hooked up" with the program. It requires a good CPU. It takes a few steps to install, including Anaconda or Miniconda (a Python environment), and may take about 5-10 GB of disk space - not because my program is particularly large; it's just that Python and end-users don't mix.
  12. For random number sequences, "frequency" refers to how much neighboring numbers differ. The sequence 3, 95, 26, 62 is bouncing around (high frequency). The sequence 25, 32, 28, 27 is less jumpy (low frequency). Now imagine pixel neighborhoods. Blurring makes all the pixels in a neighborhood almost the same. But to get finer features, you need a balance: a little bit of blurring at each length scale: neighborhood, village, city, county, state, etc.
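The "a little blurring at each length scale" idea can be sketched in a few lines of NumPy. This is not Perlin's algorithm proper, just a crude multi-octave stand-in: the function name `fractal_noise` and the nearest-neighbour upsampling are my own simplifications. Coarse random grids supply the large, blurry features; fine grids supply the detail; and each finer octave is weighted down.

```python
import numpy as np

def fractal_noise(size=64, octaves=4, persistence=0.5, seed=0):
    # Sum white noise at several length scales ("octaves"): coarse grids
    # are stretched to full size (big blurry features), fine grids stay
    # sharp, and each finer octave gets a smaller weight.
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    for o in range(octaves):
        n = 2 ** (o + 2)                   # grid resolution at this octave
        coarse = rng.standard_normal((n, n))
        reps = size // n
        # Nearest-neighbour upsample to full size (a crude stand-in for
        # the smooth interpolation real Perlin noise uses)
        layer = np.kron(coarse, np.ones((reps, reps)))
        out += (persistence ** o) * layer
    return out

img = fractal_noise()
print(img.shape)  # prints (64, 64)
```

With `persistence` near 0.5, each halving of feature size halves the amplitude, which is the balance between scales described above.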
  13. Karyn, the beauty of this new technique is it doesn't require mist, smoke, or mixing cleaning solutions (chemistry joke). We all have a webcam and a computer. I also envision a cellphone app. The software converts the digital noise and the light particles in a room into "smoke." It is like we can now see the invisible air itself.
  14. Spiricam is a general approach. It works just fine with a blank wall or in a closed box. Pointing at someone is a variation that might reveal something about their aura. To be on the safe side, one could put a sheet over their face.
  15. In the next pictures, the first three are just me with the distortion of the method, the camera pointed at me. The next one looks a little like me. #5, #6, and #7, however, showed up over my face and don't look anything like me. The final one is just one I found elsewhere in the smoky pictures.
  16. Building on my work with Perloids derived from the noise of software-defined radios, I decided to explore the noise coming from cameras. In principle, cameras provide high-bandwidth information, and spirits could potentially interact either directly with individual pixel-level sensor elements or with the light in the environment. The two "tricks" for getting things to work, then, are to derive as much noise as possible from the camera, and then to apply our favorite transformation that either blurs or "smokifies" the noise.

How do you get lots of noise from a camera? Well, one way would be to turn up the gain and exposure time and put the camera in complete darkness. Another, more universal method is to simply subtract the current frame from the previous frame. This allows us to point the camera at virtually anything (or nothing at all) and get a noisy pattern. Here's an example from a webcam. I've mathematically amplified the noise so you can see it.

Now the next trick is to somehow make heads or tails of the noise. Blurring is one possibility, which could be done with something called Gaussian kernel convolution in image manipulation programs like GIMP and Photoshop. Even better than that, however, is "smokification." That's a word I just made up, but it's like how Perlin noise is created from regular white noise. White noise is fairly featureless and looks like indistinguishable sand, but Perlin noise often produces cloudy, smoky textures like the ones you see in Keith's Perlin image experiments. Now, I've been trying out different variations on Perlin, like my so-called "Perloids," but the general range of useful ones seems to lie between "inverse frequency" and "inverse squared frequency." What this means in layman's terms is that we try to transform noise to look as real as possible using essentially a two-dimensional "graphic equalizer" (you know, the audio version for controlling high, mid, and low frequencies?).
Now imagine amplifying the "bass frequencies" (the largest features of an image), keeping the mid-levels in the middle, and damping the "treble/high frequencies" (the smallest features). This is the essence of Perlinizing / smokifying noise, and the results are amazing, as we've seen with Perlin noise images and continue to see with my webcam-noise input.

Incidentally, the original Varanormal France Perlin program uses random numbers generated with a deterministic algorithm, albeit with a non-deterministic seed. How spirits figured out how to send messages through that channel puzzles me to this day, but it seems to work. Here I'm using physically generated noise (from a webcam), so at least it seems plausible that spirits could influence the electronics or the light in the air to send us information.

And now for some pictures. Caveat emptor: I can't prove that anything we see here is really from spirit and not just active imagination. But I'm hoping that over time, the proof will either get clearer, or we'll have too many instances to dismiss. The first two are from a camera I got in the mail today, and they're a bit disappointing. But what you might see in black and white are four faces followed by one face. The next four small ones are faces. The final one on the 2nd row appears to have at least four faces. The last row is a little different. The first one is a whole-body image. The second looks like a humanoid-alien face. The final one is an example of when I point the camera at myself and hold real still. Maybe we'll start being able to view the personalities of our soul?

One very last comment for this post: there are no machine learning tricks in the method (yet?!?), just "simple" math.
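The frame differencing and the two-dimensional "graphic equalizer" can be sketched with an FFT: divide the noise spectrum by f^beta, with beta between 1 ("inverse frequency") and 2 ("inverse squared frequency"). The function name `smokify` and the random arrays standing in for consecutive webcam frames are my own illustration, not the actual Spiricam code:

```python
import numpy as np

def smokify(noise, beta=1.0):
    # Reshape white noise with a 1/f^beta filter: boost the large-scale
    # (low-frequency) features and damp the fine-grained ones.
    h, w = noise.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fy**2 + fx**2)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    spectrum = np.fft.fft2(noise) / f**beta
    return np.real(np.fft.ifft2(spectrum))

# Stand-ins for two consecutive webcam captures:
rng = np.random.default_rng(0)
frame_prev = rng.random((128, 128))
frame_curr = rng.random((128, 128))
noise = frame_curr - frame_prev         # frame differencing isolates the noise
smoke = smokify(noise, beta=1.5)
print(smoke.shape)  # prints (128, 128)
```

Here beta acts as the "equalizer" slider: beta=0 leaves the white noise untouched, while larger values push more energy into the bass (large) features.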
  17. Today's "2 minute" video. Here's what I found:
  18. Two more collections (too much fun). In the top row, the middle two just look like objects.
  19. Kevin - At some point, I'll explain in a post why there are so many hangups to having more of my audio ITC available. It's a tractable problem, but it's going to take a little effort. For now, I've temporarily caught the visual ITC bug. I hope it can help us further understand how spirit interacts with our devices, and how we can make their signal stronger.
  20. Today's pickup from about 240 images. More than stamp collecting, I'm using the picture quality and the number of faces as metrics to figure out the best settings for getting information from the SDR.
  21. Today's pickup from 300 images, 256x192. The bottom left is a duck. The bottom middle is a mouse?
  22. Some random images I picked out from the video: A man (Tesla?) makes an appearance at 4:54. Another man appears at the top of the screen at 6:56. Also, here are two people at the top left corner of 15:07, one on the right wearing a hat.
  23. Kevin: Thanks! I spent all weekend on video ITC. Here's another video. No mirror. Different method, but similar effect. SDR source. It seems like blending multiple frames of noise helps. Yes, you can find faces, but you will also notice complex 3-D objects / scenes in there, too. I feel like we're looking at the quantum foam.
  24. I put mirroring in this video because I thought it would make it easier for spirits to form faces. But, like you said, it ends up producing designs rather than faces.
  25. Here I'm using my SDR to generate noise, and then applying my Perloid processing to the noise in real time. I also mirror the image about the Y-axis, for fun.
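The blend-then-mirror step can be sketched as a running exponential blend over noise frames followed by a reflection about the vertical center line. The function name and the 0.8 blend rate are my own placeholders, not values from the actual program:

```python
import numpy as np

def blend_and_mirror(frames, lam=0.8):
    # Exponentially blend a sequence of noise frames, then mirror the
    # result about the Y (vertical center) axis so the right half
    # reflects the left.
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        acc = lam * acc + (1.0 - lam) * f    # running blend of noise frames
    half = acc[:, : acc.shape[1] // 2]
    return np.hstack([half, half[:, ::-1]])  # mirror left half onto the right

# Stand-ins for a stream of SDR-derived noise frames:
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(10)]
out = blend_and_mirror(frames)
print(out.shape)  # prints (64, 64)
```

The mirroring guarantees left-right symmetry, which is what makes face-like shapes (or, as noted above, symmetric designs) so easy to see in the output.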