Video Color Changes - Shimmering


  • Video Color Changes - Shimmering

    When I look very closely at a camera display on my PC, I can see random shifting of colors: the same pixel does not hold a constant color from frame to frame even though the subject matter is not moving. The shifts are small and subtle.

    But once I convert the video to a binary color pair (code below), the color shifting is very noticeable.

    I've looked at multiple camera displays and all have that same small-scale, color-shifting effect.

    I generally think that variances in illumination are at the heart of the effect. I've turned off the fan and the air conditioning, thinking those might be involved somehow (similar to landscapes shimmering in the distance on a hot day), but it doesn't seem to make a difference.

    Is anyone familiar with this effect, and does anyone know what causes it?

    I don't need this color conversion to see the effect, but it does make it easier to see.

    Code:
    'Compilable Example:
    #Compile Exe
    #Dim All
    
    #Debug Error On
    #Debug Display On
    
    %Unicode = 1
    #Include "Win32API.inc"
    #Include Once "dshow.inc"
    #Include "qedit.inc"
    
    %ID_Timer    = 500
    %IDC_Graphic = 501
    
    Global hDlg, hDCA, hDCB As Dword
    Global wRes, hRes, bwTrigger, TextColor, BGColor  As Long
    Global pBuffer As String Ptr, pBufferSize As Long
    Global r(), g(), b() As Long
    Global qFreq, qStart, qStop As Quad
    
    Global pGraph          As IGraphBuilder           'Filter Graph Manager
    Global pBuild          As ICaptureGraphBuilder2   'Capture Graph Builder
    Global pSysDevEnum     As ICreateDevEnum          'enumeration object
    Global pEnumCat        As IEnumMoniker
    Global pMoniker        As IMoniker                'contains information about other objects
    Global pceltFetched    As Dword
    Global pCap            As IBaseFilter             'Video capture filter
    Global pControl        As IMediaControl
    Global pWindow         As IVideoWindow            'Display Window
    
    Function PBMain() As Long
       Dialog Default Font "Tahoma", 12, 1
       Dialog New Pixels, 0, "DirectShow SampleGrabber Test",300,300,1280,480, %WS_OverlappedWindow Or %WS_ClipSiblings Or %WS_ClipChildren To hDlg
       Dialog Show Modal hDlg Call DlgProc
    End Function
    
    CallBack Function DlgProc() As Long
       Local w,h As Long, PS As PaintStruct
       Select Case Cb.Msg
          Case %WM_InitDialog
             QueryPerformanceFrequency qFreq
             bwTrigger   = 128
             TextColor   = %Yellow
             BGColor     = %Blue
    
             Control Add Graphic, hDlg, %IDC_Graphic, "", 0,0,640,480, %SS_Notify
             Graphic Attach hDlg, %IDC_Graphic
             Graphic Get DC To hDCB
             SetTimer(hDlg, %ID_Timer, 50, ByVal(%Null))
    
             pGraph      = NewCom ClsId $CLSID_FilterGraph                              'filter graph
             pBuild      = NewCom ClsId $CLSID_CaptureGraphBuilder2                     'capture graph builder
             pSysDevEnum = NewCom ClsId $CLSID_SystemDeviceEnum                         'enumeration object
             DisplayFirstCamera
    
          Case %WM_Command
             Select Case Cb.Ctl
                Case %IDC_Graphic : ChangeColor
             End Select
    
          Case %WM_Size
             Dialog Get Client hDlg To w,h
             Control Set Loc hDlg, %IDC_Graphic, w/2,0
             Control Set Size hDlg, %IDC_Graphic, w/2,h
             pWindow.SetWindowPosition(0,0,w/2,h)
             Graphic Set Size w, h                 'video and memory bitmap kept the same size
    
          Case %WM_Destroy
             pGraph = Nothing
             pBuild = Nothing
             pSysDevEnum = Nothing
    
          Case %WM_Timer
             QueryPerformanceCounter   qStart
    
             Dialog Get Client hDlg To w,h
             hDCA = GetDC(hDlg)
             BitBlt hDCB, 0,0,w,h, hDCA, 0,0, %SrcCopy 'copy using bitblt dialog hDC to memory Bitmap DC
             ConvertToBinaryColors_Beene                         'modify content of memory Bitmap
             Graphic ReDraw
    
             QueryPerformanceCounter   qStop
             Dialog Set Text hDlg, Format$((qStop-qStart)/qFreq,"###.000") & " seconds"
    
    
       End Select
    End Function
    
    Sub ChangeColor
       If TextColor = %Yellow And BGColor = %Blue Then
          TextColor = %White  : BGColor = %Blue
       ElseIf TextColor = %White  And BGColor = %Blue Then
          TextColor = %White  : BGColor = %Black
       ElseIf TextColor = %White  And BGColor = %Black Then
          TextColor = %Black  : BGColor = %White
       ElseIf TextColor = %Black And BGColor = %White Then
          TextColor = RGB(150,150,150) : BGColor = %White
       Else
          TextColor = %Yellow : BGColor = %Blue
       End If
    End Sub
    
    
    Sub DisplayFirstCamera
    
       If pSysDevEnum.CreateClassEnumerator($CLSID_VideoInputDeviceCategory, pEnumCat, 0) <> %S_Ok Then Exit Sub
       pEnumCat.next(1, pMoniker, pceltFetched)                               'cycle through monikers
       pMoniker.BindToObject(Nothing, Nothing, $IID_IBaseFilter, pCap)       'create device filter for the chosen device
       pGraph.AddFilter(pCap,"First Camera")                                 'add chosen device filter to the filter graph
       pBuild.SetFilterGraph(pGraph)                                         'initialize pBuild
       pBuild.RenderStream $Pin_Category_Preview, $MediaType_Video, pCap, Nothing, Nothing   'render the live source
       pWindow = pGraph
       pWindow.Owner = hDlg
       pWindow.WindowStyle = %WS_Child Or %WS_ClipChildren Or %WS_ClipSiblings  'video window settings
       pControl = pGraph
       pControl.Run
    End Sub
    
    Sub ConvertToBinaryColors_Beene
       Local w,h,i,iColor,R,G,B As Long, p As Long Ptr, bmp$
       Graphic Get Bits To bmp$
       'get width/height of image
       w = Cvl(bmp$,1)
       h = Cvl(bmp$,5)
       p = StrPtr(bmp$)+8    'position of starting position for bits in string
       'get string position of coordinates and modify the string at that position
       For i = 1 To w*h
          iColor = @p                           'result is a BGR color value 0-R-G-B
          B = iColor Mod 256                    'or this: iColor AND &HFF&
          G = (iColor\256) Mod 256              'or this: (iColor AND &HFF00&) \ &H100
          R = (iColor\256\256) Mod 256          'or this: (iColor AND &HFF0000&) \ &H10000&
          iColor = 0.299*R + 0.587*G + 0.114*B  'or this: iColor = (R+G+B)/3
          If iColor <= BWTrigger Then @p = Bgr(TextColor) Else @p = Bgr(BGColor)
          Incr p
       Next i
       Graphic Set Bits bmp$
       Graphic ReDraw
    End Sub

  • #2
    It sounds like the usual noise you get on any image.
    No electrical signal is exactly constant and camera images can suffer from lots of types of noise including thermal noise, quantisation noise and shot noise.

    Because the sensor is warm (around 300 K), electrons will occasionally be displaced by heat instead of by photons. These add small random signals to the underlying ideal signal.
    You may have seen large astronomical telescopes with liquid nitrogen cooling of the sensor to reduce this thermal noise.


    When the image is digitised, the measured value will often sit on the border between two quantisation levels, so sometimes it will read 1 bit high and sometimes 1 bit low.

    Then there's the count of photons hitting the sensor and how efficiently the sensor converts them to electrons.
    You may get 2000 photons hitting each pixel in the sensor each frame. They might generate 1500 electrons.
    But in practice the number of photons is not exactly 2000 per frame; it varies around an average of 2000. And the sensor efficiency isn't exactly 75%, so even if there are exactly 2000 photons you won't get exactly 1500 electrons each time.
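
    As a rough back-of-envelope, assuming pure Poisson (shot) statistics and ignoring the other noise sources, the frame-to-frame spread scales with the square root of the count:

    $\sigma_{\text{shot}} = \sqrt{N} \approx \sqrt{2000} \approx 45 \text{ photons}$

    That's roughly a 2% flicker in every pixel, every frame, even with a perfectly steady scene.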



    • #3
      Hi Paul!

      Well, that's a heap of information on the topic. It sounds like a topic you're very familiar with. Thanks for the response!

      It sounds like there's no fixing the incoming signal, so I may have to add "filter" code of some kind to the conversion code. Any suggestion on what type of filter would apply in this situation?

      I don't know whether a typical noise filter for content with many colors would be adequate for binary content. I think of a noise filter as something that "smooths" out the result, but with binary color content, "smoothing" may not adequately describe the effect.

      The good news is that with some of the fast conversion code I've seen, including yours from 2017, there is some room to add extra filter code!



      • #4
        Gary,
        it's called"coring" and it's crude but simple.
        If the value doesn't change enough then don't display the change.

        I've slightly modified your ConvertToBinaryColors_Beene function so it remembers what the previous image was and only updates the display if it changes by more than the expected noise level, in this case 10.
        You'll need to play around with the values.

        Code:
        PBWin10 program
        'Compilable Example:
        #COMPILE EXE
        #DIM ALL
        
        #DEBUG ERROR ON
        #DEBUG DISPLAY ON
        
        %Unicode = 1
        #INCLUDE "Win32API.inc"
        #INCLUDE ONCE "dshow.inc"
        #INCLUDE "qedit.inc"
        
        %ID_Timer    = 500
        %IDC_Graphic = 501
        
        GLOBAL hDlg, hDCA, hDCB AS DWORD
        GLOBAL wRes, hRes, bwTrigger, TextColor, BGColor  AS LONG
        GLOBAL pBuffer AS STRING PTR, pBufferSize AS LONG
        GLOBAL r(), g(), b() AS LONG
        GLOBAL qFreq, qStart, qStop AS QUAD
        
        GLOBAL pGraph          AS IGraphBuilder           'Filter Graph Manager
        GLOBAL pBuild          AS ICaptureGraphBuilder2   'Capture Graph Builder
        GLOBAL pSysDevEnum     AS ICreateDevEnum          'enumeration object
        GLOBAL pEnumCat        AS IEnumMoniker
        GLOBAL pMoniker        AS IMoniker                'contains information about other objects
        GLOBAL pceltFetched    AS DWORD
        GLOBAL pCap            AS IBaseFilter             'Video capture filter
        GLOBAL pControl        AS IMediaControl
        GLOBAL pWindow         AS IVideoWindow            'Display Window
        
        FUNCTION PBMAIN() AS LONG
           DIALOG DEFAULT FONT "Tahoma", 12, 1
           DIALOG NEW PIXELS, 0, "DirectShow SampleGrabber Test",300,300,1280,480, %WS_OVERLAPPEDWINDOW OR %WS_CLIPSIBLINGS OR %WS_CLIPCHILDREN TO hDlg
           DIALOG SHOW MODAL hDlg CALL DlgProc
        END FUNCTION
        
        CALLBACK FUNCTION DlgProc() AS LONG
           LOCAL w,h AS LONG, PS AS PaintStruct
           SELECT CASE CB.MSG
              CASE %WM_INITDIALOG
                 QueryPerformanceFrequency qFreq
                 bwTrigger   = 128
                 TextColor   = %YELLOW
                 BGColor     = %BLUE
        
                 CONTROL ADD GRAPHIC, hDlg, %IDC_Graphic, "", 0,0,640,480, %SS_NOTIFY
                 GRAPHIC ATTACH hDlg, %IDC_Graphic
                 GRAPHIC GET DC TO hDCB
                 SetTimer(hDlg, %ID_Timer, 50, BYVAL(%Null))
        
                 pGraph      = NEWCOM CLSID $CLSID_FilterGraph                              'filter graph
                 pBuild      = NEWCOM CLSID $CLSID_CaptureGraphBuilder2                     'capture graph builder
                 pSysDevEnum = NEWCOM CLSID $CLSID_SystemDeviceEnum                         'enumeration object
                 DisplayFirstCamera
        
              CASE %WM_COMMAND
                 SELECT CASE CB.CTL
                    CASE %IDC_Graphic : ChangeColor
                 END SELECT
        
              CASE %WM_SIZE
                 DIALOG GET CLIENT hDlg TO w,h
                 CONTROL SET LOC hDlg, %IDC_Graphic, w/2,0
                 CONTROL SET SIZE hDlg, %IDC_Graphic, w/2,h
                 pWindow.SetWindowPosition(0,0,w/2,h)
                 GRAPHIC SET SIZE w, h                 'video and memory bitmap kept the same size
        
              CASE %WM_DESTROY
                 pGraph = NOTHING
                 pBuild = NOTHING
                 pSysDevEnum = NOTHING
        
              CASE %WM_TIMER
                 QueryPerformanceCounter   qStart
        
                 DIALOG GET CLIENT hDlg TO w,h
                 hDCA = GetDC(hDlg)
                 BitBlt hDCB, 0,0,w,h, hDCA, 0,0, %SrcCopy 'copy using bitblt dialog hDC to memory Bitmap DC
                 ConvertToBinaryColors_Beene                         'modify content of memory Bitmap
                 GRAPHIC REDRAW
        
                 QueryPerformanceCounter   qStop
                 DIALOG SET TEXT hDlg, FORMAT$((qStop-qStart)/qFreq,"###.000") & " seconds"
        
        
           END SELECT
        END FUNCTION
        
        SUB ChangeColor
           IF TextColor = %YELLOW AND BGColor = %BLUE THEN
              TextColor = %WHITE  : BGColor = %BLUE
           ELSEIF TextColor = %WHITE  AND BGColor = %BLUE THEN
              TextColor = %WHITE  : BGColor = %BLACK
           ELSEIF TextColor = %WHITE  AND BGColor = %BLACK THEN
              TextColor = %BLACK  : BGColor = %WHITE
           ELSEIF TextColor = %BLACK AND BGColor = %WHITE THEN
              TextColor = RGB(150,150,150) : BGColor = %WHITE
           ELSE
              TextColor = %YELLOW : BGColor = %BLUE
           END IF
        END SUB
        
        
        SUB DisplayFirstCamera
        
           IF pSysDevEnum.CreateClassEnumerator($CLSID_VideoInputDeviceCategory, pEnumCat, 0) <> %S_OK THEN EXIT SUB
            pEnumCat.next(1, pMoniker, pceltFetched)                               'cycle through monikers
           pMoniker.BindToObject(NOTHING, NOTHING, $IID_IBaseFilter, pCap)       'create device filter for the chosen device
           pGraph.AddFilter(pCap,"First Camera")                                 'add chosen device filter to the filter graph
           pBuild.SetFilterGraph(pGraph)                                         'initialize pBuild
           pBuild.RenderStream $Pin_Category_Preview, $MediaType_Video, pCap, NOTHING, NOTHING   'render the live source
           pWindow = pGraph
           pWindow.Owner = hDlg
           pWindow.WindowStyle = %WS_CHILD OR %WS_CLIPCHILDREN OR %WS_CLIPSIBLINGS  'video window settings
           pControl = pGraph
           pControl.Run
        END SUB
        
        SUB ConvertToBinaryColors_Beene
           LOCAL w,h,i,iColor,R,G,B AS LONG, p AS LONG PTR, bmp$
        
           STATIC PreviousValues() AS LONG
           LOCAL NoiseThreshold AS LONG
        
           NoiseThreshold = 10
        
        
        
           GRAPHIC GET BITS TO bmp$
           'get width/height of image
           w = CVL(bmp$,1)
           h = CVL(bmp$,5)
           p = STRPTR(bmp$)+8    'position of starting position for bits in string
        
        
           REDIM PRESERVE PreviousValues(1 TO w*h)  'should really only dim this once but I'm being lazy
        
        
        
           'get string position of coordinates and modify the string at that position
           FOR i = 1 TO w*h
              iColor = @p                           'result is a BGR color value 0-R-G-B
              B = iColor MOD 256                    'or this: iColor AND &HFF&
              G = (iColor\256) MOD 256              'or this: (iColor AND &HFF00&) \ &H100
              R = (iColor\256\256) MOD 256          'or this: (iColor AND &HFF0000&) \ &H10000&
              iColor = 0.299*R + 0.587*G + 0.114*B  'or this: iColor = (R+G+B)/3
        
        
              IF ABS(iColor - PreviousValues(i)) > NoiseThreshold THEN
                  'the change is bigger than the expected noise so display the change
        
                    IF iColor <= BWTrigger THEN @p = BGR(TextColor) ELSE @p = BGR(BGColor)
        
                     'update the previous values
                    PreviousValues(i) = iColor
        
              ELSE
                  'the change is not enough so keep the original value
                  IF PreviousValues(i) <= BWTrigger THEN @p = BGR(TextColor) ELSE @p = BGR(BGColor)
        
              END IF
        
        
              INCR p
        
           NEXT i
           GRAPHIC SET BITS bmp$
           GRAPHIC REDRAW
        END SUB



        • #5
          Paul!

          Have to walk out for an hour or so - family situation. Thanks for the post and I'll give it a try when I get back!



          • #6
            Paul,
            There's no doubt that it reduces the shimmering to some degree. It seems to work better when the camera is looking at a landscape.

            The most useful situation for me is when viewing text - such as a newspaper. Having the newspaper background all white and the text all black - with no shimmering at the edge of the text is the goal - a nice, sharp dividing line is what I'd hope to see.

            That may mean that different algorithms need to be applied, depending on the expected target.

            I'll definitely play with your code and try out different values.





            • #7
              Howdy, Paul!

              Larger numbers definitely provide better noise (shimmering) reduction. Smaller numbers respond faster to changes in the target, whereas larger numbers slow down the response to moving the target in the camera field of view.

              There are edge artifacts with the approach.

               For example, a vertical line can be "notched", as though a bite has been taken out of the side. It's a very prevalent artifact.

               [Attached images: pb_2140.jpg, pb_2141.jpg]

               I don't see that changing the NoiseThreshold makes much difference to the artifacts.

              By email, I had a friend make this suggestion about types of filters he thought might be helpful ...

              ...try erosion and dilation (aka closing and opening) or counting.
              I'm not sure what those are but I will take a look. He had a few links for me to follow with more information.

              Still working with it ... will let you know what else I find out.
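
               For anyone following along, here is a minimal sketch of the "counting" idea: a 3x3 majority vote over the already-binarized frame, so an isolated flickering pixel gets outvoted by its neighbours. The sub name and the 5-of-9 cutoff are assumptions, not tested code.

               Code:
               'Rough sketch only - assumes the frame has already been converted to the two
               'colours by ConvertToBinaryColors_Beene; the 5-of-9 vote is a value to play with
               SUB Despeckle3x3
                  LOCAL w,h,x,y,dx,dy,n AS LONG, pSrc,pDst AS LONG PTR, bmp$, src$
                  GRAPHIC GET BITS TO bmp$
                  w = CVL(bmp$,1)
                  h = CVL(bmp$,5)
                  src$ = bmp$                            'unmodified copy to read neighbours from
                  pSrc = STRPTR(src$)+8
                  pDst = STRPTR(bmp$)+8
                  FOR y = 1 TO h-2                       'skip the one-pixel border for simplicity
                     FOR x = 1 TO w-2
                        n = 0
                        FOR dy = -1 TO 1                 'count foreground (text) pixels in the 3x3 block
                           FOR dx = -1 TO 1
                              IF @pSrc[(y+dy)*w+(x+dx)] = BGR(TextColor) THEN INCR n
                           NEXT dx
                        NEXT dy
                        'majority vote: the centre pixel takes whichever colour dominates its neighbourhood
                        IF n >= 5 THEN @pDst[y*w+x] = BGR(TextColor) ELSE @pDst[y*w+x] = BGR(BGColor)
                     NEXT x
                  NEXT y
                  GRAPHIC SET BITS bmp$
                  GRAPHIC REDRAW
               END SUB

               Tightening the vote to 9-of-9 turns the same loop into an erosion, and loosening it to 1-of-9 gives a dilation, so one sub covers all three suggestions.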




              • #8
                 Originally posted by Gary Beene:

                >> try erosion and dilation (aka closing and opening) or counting.

                I'm not sure what those are but I will take a look. He had a few links for me to follow with more information.

                Still working with it ... will let you know what else I find out.

                Where the only changes to the frame are those imposed by these effects, they may be reduced by averaging each pixel over time.
                Even at ten frames per second, the averaging improves the image enough in less than one second to provide a more stable image.
                 You then have to be able to determine whether it is a static image.
                For dilation, you selectively blur and average over a short distance where you detect sharp delineations.
                There are a lot of articles on these processes.
                Processing speed is going to become an issue here.
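
                 A minimal sketch of that time-averaging idea, as a variant of the conversion sub (the sub name and the 0.2 weighting are assumptions, not tested code): keep an exponential moving average of each pixel's grey value and threshold the average rather than the raw frame.

                 Code:
                 SUB ConvertWithAveraging
                    LOCAL w,h,i,iColor,R,G,B AS LONG, p AS LONG PTR, bmp$
                    STATIC Avg() AS SINGLE
                    GRAPHIC GET BITS TO bmp$
                    w = CVL(bmp$,1)
                    h = CVL(bmp$,5)
                    p = STRPTR(bmp$)+8
                    REDIM PRESERVE Avg(1 TO w*h)         'as before, should really be dimmed just once
                    FOR i = 1 TO w*h
                       iColor = @p
                       B = iColor AND &HFF&
                       G = (iColor AND &HFF00&) \ &H100
                       R = (iColor AND &HFF0000&) \ &H10000&
                       iColor = 0.299*R + 0.587*G + 0.114*B
                       'each new frame contributes 20%, so the noise is smoothed over
                       'roughly the last five frames before the black/white decision
                       Avg(i) = Avg(i)*0.8 + iColor*0.2
                       IF Avg(i) <= bwTrigger THEN @p = BGR(TextColor) ELSE @p = BGR(BGColor)
                       INCR p
                    NEXT i
                    GRAPHIC SET BITS bmp$
                    GRAPHIC REDRAW
                 END SUB

                 The trade-off is the same one as with the coring threshold: the heavier the averaging, the longer a moving page smears before the display catches up, and the first few frames read dark until the average settles.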
                The world is strange and wonderful.*
                I reserve the right to be horrifically wrong.
                Please maintain a safe following distance.
                *wonderful sold separately.



                • #9
                  Howdy, Kurt!

                  Thanks for the various responses.

                  The underlying goal in these various threads of late is that I want to take an incoming video/image (top image) and convert it to a pair of high contrast colors (bottom image) for low vision users. Text is the primary video content of interest.

                   I've been quite happy that the DirectShow code gets the video onscreen, and that the binary color conversion can be done at full frame rates.

                  But, the phrase "The devil is in the details!" seems to fit the current situation. The incoming video noise introduces the need to figure out how to use filters to improve the quality of the binary color image and as you say, speed will likely take a hit.

                  Oh well, one hill at a time!

                   I could point out that OCR would be something to consider, except that the incoming video is not simply text. I don't know whether OCR technology can handle both images and text. Problems there, I expect.

                   [Attached image: pb_2145.jpg]



                  • #10
                    Paul,

                     I'm putting together a test bed that will enable testing of the various binary color conversion routines that have been discussed, as well as applying 3x3 convolution filters.

                    My first thought was that I can apply multiple, sequential conversions/filters by letting each one modify bmp$, then pass bmp$ on to the next filter for modification.

                     But in a case like your coring filter code, where the binary conversion and the coring are integrated, trying to split them up might slow down the overall process.

                    I'm not entirely sure that is true. The PreviousValues() could be made available from a copy of the original bmp$, such that breaking out the coring steps might not have as much of an effect as I might expect.

                    I'll need to try it out to see if I have that right.
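
                     As a minimal sketch of that chaining idea, assuming each stage were rewritten to take the DIB string by reference instead of calling GRAPHIC GET/SET BITS itself (the sub names below are placeholders, not existing code):

                     Code:
                     SUB ProcessFrame
                        LOCAL bmp$
                        GRAPHIC GET BITS TO bmp$           'grab the bits once per frame
                        BinaryConvert bmp$                 'stage 1: threshold to the two colours
                        CoringFilter  bmp$                 'stage 2: suppress changes below the noise level
                        Despeckle3x3  bmp$                 'stage 3: 3x3 counting/convolution filter
                        GRAPHIC SET BITS bmp$              'write the bits back once
                        GRAPHIC REDRAW
                     END SUB

                     Each stage only ever touches bmp$, so stages can be reordered or dropped without touching the others.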



                    • #11
                      Any filter reduces the real data in the image so you can't just apply loads of arbitrary filters and expect to have much of the original image left.

                      For example, a vertical line can be "notched"
                       That can't be solved with just 2 colours. Each pixel must become either foreground or background, and a pixel in between is then going to stand out if it's on the wrong side of the line.
                       It's the same as the jaggies you get on text, and the way to fix that is to use more colours and blend between the 2 colours instead of switching from one to the other, just like the anti-aliasing done on text on your computer screen.

                       You can also improve their appearance by using a higher resolution image; the errant pixel then becomes smaller and isn't so noticeable.
                       If you're magnifying images so the partially sighted end user is able to see them, then maybe the end user won't see the artifacts you do with your good eyes, and the problem isn't as bad as you think.
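
                       A minimal sketch of that blending idea, as a variant of the conversion sub (the sub name and the +/-16 grey band are assumptions to play with, not tested code): grey values well away from bwTrigger still snap to the two colours, while values inside the band get an in-between colour.

                       Code:
                       SUB BinaryConvertBlended
                          LOCAL w,h,i,iColor,R,G,B AS LONG, p AS LONG PTR, bmp$
                          LOCAL f AS SINGLE
                          LOCAL fgR,fgG,fgB,bgR,bgG,bgB AS LONG
                          'split the two chosen colours into components once, outside the pixel loop
                          fgR = TextColor AND &HFF&
                          fgG = (TextColor AND &HFF00&) \ &H100
                          fgB = (TextColor AND &HFF0000&) \ &H10000&
                          bgR = BGColor AND &HFF&
                          bgG = (BGColor AND &HFF00&) \ &H100
                          bgB = (BGColor AND &HFF0000&) \ &H10000&
                          GRAPHIC GET BITS TO bmp$
                          w = CVL(bmp$,1)
                          h = CVL(bmp$,5)
                          p = STRPTR(bmp$)+8
                          FOR i = 1 TO w*h
                             iColor = @p
                             B = iColor AND &HFF&
                             G = (iColor AND &HFF00&) \ &H100
                             R = (iColor AND &HFF0000&) \ &H10000&
                             iColor = 0.299*R + 0.587*G + 0.114*B
                             IF iColor <= bwTrigger - 16 THEN
                                @p = BGR(TextColor)
                             ELSEIF iColor > bwTrigger + 16 THEN
                                @p = BGR(BGColor)
                             ELSE
                                'inside the band: blend between the two colours instead of switching
                                f = (iColor - (bwTrigger - 16)) / 32
                                @p = BGR(RGB(fgR + f*(bgR-fgR), fgG + f*(bgG-fgG), fgB + f*(bgB-fgB)))
                             END IF
                             INCR p
                          NEXT i
                          GRAPHIC SET BITS bmp$
                          GRAPHIC REDRAW
                       END SUB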



                      • #12
                        Gary,
                         all these 3x3 and 5x5 pixel filters are only going to adjust the image over very small areas.

                        Probably a more worthwhile type of filter would be one which looks at much wider areas on the page you're trying to read to compensate for variations in brightness across the page.
                        When I hold a page of text up to my laptop webcam I can only see the text at the top because the lower half of the page is less well lit and both text and background fall below the brightness threshold.

                         It's also a problem that when I change the lighting conditions to help, the camera automatically adjusts and compensates for my changes, so things don't improve as they should!


                        Have you looked at adjusting the camera properties?
                        Maybe if you can tell the camera to increase the contrast and fix the brightness then it will do half of the work for you?
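
                         A minimal sketch of that wider-area idea (the sub name and the 0.85 factor are assumptions, not tested code): threshold each pixel against the average brightness of its own row, so the trigger follows the light falling off down the page instead of using one fixed bwTrigger for the whole frame.

                         Code:
                         SUB AdaptiveThresholdRows
                            LOCAL w,h,x,y,iColor,R,G,B,rowMean AS LONG, p AS LONG PTR, bmp$
                            LOCAL rowSum AS QUAD
                            GRAPHIC GET BITS TO bmp$
                            w = CVL(bmp$,1)
                            h = CVL(bmp$,5)
                            p = STRPTR(bmp$)+8
                            FOR y = 0 TO h-1
                               rowSum = 0
                               FOR x = 0 TO w-1                      'first pass: average grey level of this row
                                  iColor = @p[y*w+x]
                                  B = iColor AND &HFF&
                                  G = (iColor AND &HFF00&) \ &H100
                                  R = (iColor AND &HFF0000&) \ &H10000&
                                  rowSum = rowSum + (0.299*R + 0.587*G + 0.114*B)
                               NEXT x
                               rowMean = rowSum \ w
                               FOR x = 0 TO w-1                      'second pass: darker than its own row means text
                                  iColor = @p[y*w+x]
                                  B = iColor AND &HFF&
                                  G = (iColor AND &HFF00&) \ &H100
                                  R = (iColor AND &HFF0000&) \ &H10000&
                                  iColor = 0.299*R + 0.587*G + 0.114*B
                                  IF iColor <= rowMean * 0.85 THEN @p[y*w+x] = BGR(TextColor) ELSE @p[y*w+x] = BGR(BGColor)
                               NEXT x
                            NEXT y
                            GRAPHIC SET BITS bmp$
                            GRAPHIC REDRAW
                         END SUB

                         On a row with no text at all this will over-trigger, so in practice it would probably be combined with a minimum-contrast check or blended with the global bwTrigger.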



                        • #13
                          Good morning, Paul!

                          You've brought up some excellent points.

                          <<user won't see the artifacts ...
                          Yes, I've had that thought before. I've even talked to low-vision users about it and I've been told that the artifacts are visible. However, a passing question is not the same as a more detailed study with multiple users. And, their response that they can see it doesn't mean it's as much a reading hurdle to them as it might seem to me. I'll spend some more time with some folks to talk more about the issue.

                          <<use more colors
                           There are competing requirements here. The binary color mode is used to get a high contrast between the FG and BG content. I've never seen any technical/medical studies that discuss the pros and cons of anti-aliasing for low-vision users.

                          I've already contacted some folks I know with low vision issues and plan to discuss this topic with them. I have a local low-vision eye doctor with whom I'll talk as well.

                          <<camera properties
                          I've had the thought but no time to look at it just yet. I do plan to contact a camera provider that I know and discuss the topic with them as well.

                          <<lighting
                          Yes, I've found that bright light is way better - less of the shimmering. But it's just better, not a cure.

                           Also, my folks would typically lay the printed material flat so the lighting is more even - but there's still some fading at the edges, particularly for newspapers or other items that are large in size.

                          <<tell the camera..
                          I wish! My DirectShow skills have not reached the point to where I can freely adjust the camera properties. I've had some success but just haven't had enough time to work it beyond the basic demo stage. Changing the camera properties may be quite important. When I talk to the camera provider, I'll ask questions along this line.





                          • #14
                            Gary,

                             It has something to do with the camera's hardware. Some of the older, lower-resolution cameras tend to give you "sparkling" edges if you sharpen them at all, where a high-resolution camera does not. The shimmering comes from slight movement of the camera, and it shows up if the camera does not produce high-quality video. If the video is very poor, you can fix it a bit by smoothing the video, but you lose detail doing that.
                            hutch at movsd dot com
                            The MASM Forum

                            www.masm32.com



                            • #15
                              Good morning, Steve, and thanks for the comments.

                              I have noticed that the shimmering is much worse when a picture is on-screen as compared to on-screen text. With text, the shimmering is only on the edges, whereas the picture shimmering is all over the picture.

                               That seems consistent with your comment about slight movement of the camera coming into play. Shimmering of the interior of text just shuffles a bunch of black pixels, whereas the entire surface of a picture consists of varying colors - more susceptible, I think, to shimmering from only slight movements.

                              I've seen that cameras on an arm display some "wiggle" of the image, even when I think the camera is sitting solidly on a stable platform. The arm movement may be quite small, but still noticeable on a magnified HD display.

                              As soon as I get my convolution filter test bed working (today), I'll play with various filter effects to see what might help.

                              And as Paul asked earlier, there may be camera settings which help as well. I'll be contacting the camera manufacturer on that topic.

                              Or, perhaps I need to contact the NSA. They seem to be able to read license plates from hundreds of miles away and through a shimmering atmosphere. It seems that they have a solution in hand!



                              • #16
                                 Originally posted by Gary Beene:

                                Or, perhaps I need to contact the NSA. They seem to be able to read license plates from hundreds of miles away and through a shimmering atmosphere. It seems that they have a solution in hand!
                                They use averaging over time, where the image is cumulatively processed, each frame borrowing from those previous.
                                https://www.ncbi.nlm.nih.gov/pubmed/2230302
                                The world is strange and wonderful.*
                                I reserve the right to be horrifically wrong.
                                Please maintain a safe following distance.
                                *wonderful sold separately.



                                • #17
                                  I just got off the phone with a tech guy from Elmo, the company that produces one of the cameras I use.

                                   He says that the primary cause of the shimmering I'm seeing is the processing they do internally in the camera - translating the sensor output to a video stream. He seemed quite familiar with the camera's workings and said that in the past, Elmo provided an API which would have helped eliminate the shimmering. But now that they use standard UVC drivers, the custom drivers are no longer available.

                                  I see the shimmering with every camera I've tried - Logitech, Elmo, and Ipevo.

                                  I suspect there's more to know about this issue, but I thought I'd report the conversation.

