OpenCV Python - can I transform this simple code into a GUI application using OpenCV methods and classes...











Basically, I have the mouse_match OpenCV code that takes a directory path as an argument, loops over the images in the directory, lets the user select a portion of an image with the mouse, and then performs template matching of that patch against the full image.
It has the following undesirable behaviours:
1. It takes its input as command-line arguments.
2. It returns a black image with bright spots marking the points where the highest similarity was found.



Is it possible to create a GUI using OpenCV methods and classes that achieves the following? (A rough sketch of points 2-4 follows this list.)
1. The user can make multiple selections, from one image or from other images in the directory.
2. Template matching is run for every selected object against every image in the directory.
3. The directory images are returned with bounding boxes enclosing the occurrences of the selected objects.
4. Some statistics about the occurrences are shown, e.g. each object against its number of occurrences.
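
For points 2-4, plain cv.matchTemplate can be run for every saved selection against every image, keeping all locations above a similarity threshold, drawing a box at each and counting them. The sketch below is only an outline, not working application code: the templates dictionary, the 0.8 threshold and the "boxed_" output naming are assumptions to adapt, and overlapping hits would still need to be merged (non-maximum suppression) before the counts are meaningful.

import glob
import os

import cv2 as cv
import numpy as np


def find_occurrences(image_dir, templates, threshold=0.8):
    """templates: dict mapping an object name to a grayscale patch (numpy array)."""
    counts = {name: 0 for name in templates}  # object name -> number of occurrences
    for path in sorted(glob.glob(os.path.join(image_dir, '*.*'))):
        img = cv.imread(path)
        if img is None:
            continue
        gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
        for name, tmpl in templates.items():
            h, w = tmpl.shape[:2]
            res = cv.matchTemplate(gray, tmpl, cv.TM_CCOEFF_NORMED)
            ys, xs = np.where(res >= threshold)  # every location scoring above the threshold
            for x, y in zip(xs, ys):
                cv.rectangle(img, (int(x), int(y)), (int(x) + w, int(y) + h), (0, 255, 0), 2)
                counts[name] += 1  # note: overlapping hits are counted more than once here
        cv.imwrite(os.path.join(image_dir, 'boxed_' + os.path.basename(path)), img)
    return counts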



Any help will be appreciated :)
Even pseudocode-like instructions would be handy.



This is the code that I have in hand:



"""
mouse_and_match.py [-i path | --input path: default ./]
Demonstrate using a mouse to interact with an image:
Read in the images in a directory one by one
Allow the user to select parts of an image with a mouse
When they let go of the mouse, it correlates (using matchTemplate) that patch with the image.
ESC to exit
"""

import numpy as np
from math import *
import sys
import os
import glob
import argparse
import cv2 as cv

drag_start = None
sel = (0,0,0,0)

def onmouse(event, x, y, flags, param):
global drag_start, sel
if event == cv.EVENT_LBUTTONDOWN:
drag_start = x, y
sel = 0,0,0,0
elif event == cv.EVENT_LBUTTONUP:
if sel[2] > sel[0] and sel[3] > sel[1]:
patch = gray[sel[1]:sel[3],sel[0]:sel[2]]
result = cv.matchTemplate(gray,patch,cv.TM_CCOEFF_NORMED)
result = np.abs(result)**3
val, result = cv.threshold(result, 0.01, 0, cv.THRESH_TOZERO)
result8 = cv.normalize(result,None,0,255,cv.NORM_MINMAX,cv.CV_8U)
cv.imshow("result", result8)
drag_start = None
elif drag_start:
#print flags
if flags & cv.EVENT_FLAG_LBUTTON:
minpos = min(drag_start[0], x), min(drag_start[1], y)
maxpos = max(drag_start[0], x), max(drag_start[1], y)
sel = minpos[0], minpos[1], maxpos[0], maxpos[1]
img = cv.cvtColor(gray, cv.COLOR_GRAY2BGR)
cv.rectangle(img, (sel[0], sel[1]), (sel[2], sel[3]), (0,255,255), 1)
cv.imshow("gray", img)
else:
print("selection is complete")
drag_start = None

if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Demonstrate mouse interaction with images')
parser.add_argument("-i","--input", default='./', help="Input directory.")
args = parser.parse_args()
path = args.input

#cv.namedWindow("gray",1)
cv.namedWindow("gray", cv.WINDOW_AUTOSIZE)
cv.setMouseCallback("gray", onmouse)
'''Loop through all the images in the directory'''
for infile in glob.glob( os.path.join(path, '*.*') ):
ext = os.path.splitext(infile)[1][1:] #get the filename extenstion
if ext == "png" or ext == "jpg" or ext == "bmp" or ext == "tiff" or ext == "pbm":
print(infile)

img=cv.imread(infile,1)
if img.all() == None:
continue
sel = (0,0,0,0)
drag_start = None
gray1=cv.cvtColor(img, cv.COLOR_BGR2GRAY)
ret,gray=cv.threshold(gray1,127,255,cv.THRESH_BINARY)
cv.imshow("gray",gray)
if (cv.waitKey() & 255) == 27:
break
cv.destroyAllWindows()
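
For the multiple selections of point 1, the hand-rolled mouse handler above could also be replaced by OpenCV's own cv.selectROIs (available since OpenCV 3.2), which lets the user drag several boxes on one image. A minimal sketch; the window name is chosen here only for illustration:

import cv2 as cv


def collect_templates(img):
    """Let the user drag any number of boxes; return the patches as grayscale templates.
    SPACE or ENTER confirms each box, ESC finishes the selection."""
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    boxes = cv.selectROIs("select objects", img)
    cv.destroyWindow("select objects")
    return [gray[y:y + h, x:x + w] for (x, y, w, h) in boxes]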









Tags: python






asked Nov 22 at 12:39 by Shady.Hegazy00
          1 Answer
Here is a program that integrates OpenCV with a GUI. It identifies and labels objects using YOLO.

You will need to install PySimpleGUI for the GUI portion. It utilizes tkinter, which is built into Python. To keep things really simple, PySimpleGUI is a single .py file, should you not want to do a pip install.

Video of the program running.

answered Nov 22 at 16:14 by MikeyB
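
For reference, a minimal sketch of the general idea, not the linked program: the layout, element keys and PNG round-trip below are my assumptions. An OpenCV image can be shown inside a PySimpleGUI window by handing the Image element encoded bytes.

import glob
import os

import cv2 as cv
import PySimpleGUI as sg

layout = [[sg.Input(key='-FOLDER-'), sg.FolderBrowse('Browse'), sg.Button('Load')],
          [sg.Image(key='-IMAGE-')]]
window = sg.Window('Template matching GUI', layout)

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
    if event == 'Load':
        files = sorted(glob.glob(os.path.join(values['-FOLDER-'], '*.png')))
        if files:
            img = cv.imread(files[0])
            if img is not None:
                # The Image element displays encoded bytes; PNG keeps this lossless.
                window['-IMAGE-'].update(data=cv.imencode('.png', img)[1].tobytes())
window.close()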






• This is fine, but it is not what I meant. The data I will parse is not common, and there is not enough of it for training. So I need to use template matching or some other non-learning algorithm.
– Shady.Hegazy00
Nov 27 at 7:07










• I'm suggesting a GUI solution, not a solution to your object finding. It's a method to give you some GUI controls around an OpenCV window. It wasn't meant to be an attempt at the algorithm you're writing.
– MikeyB
Nov 28 at 3:13










