
c# - Usage of Server Side Controls in MVC Framework -

I am using ASP.NET 4.0 with an MVC 2.0 web application. The project requirement is to use server-side controls in the application, which is not possible in the normal case. Ideally I want to use the AdRotator and DataList controls. I saw a few samples and references in the CodePlex MVC ControlLib project; however, I found them less useful. Can anyone tell me how to utilize these controls in an ASP.NET application along with MVC? Note: please provide functionality related to the AdRotator and DataList controls themselves, not equivalent functionality. Thanks in advance.

MVC pages do not use the normal .NET page lifecycle, which makes use of normal .NET components impossible. A normal .NET page uses an event-driven model that calls different methods on the server side; MVC uses actions and views, a completely different way of handling things. Also, MVC does not use the ViewState that normal .NET controls require. I found an article discussing mixing normal .NET (Web Forms) and MVC.

Python OpenCV: Detecting a general direction of movement? -


I'm still hacking away at my book scanning script, and now I need to be able to automagically detect a page turn. The book fills 90% of the screen (I'm using a cruddy webcam for motion detection), so when I turn a page, the direction of motion is basically all in the same direction.

I have modified a motion-tracking script, but derivatives are getting me nowhere:

#!/usr/bin/env python

import cv, numpy

class Target:
    def __init__(self):
        self.capture = cv.CaptureFromCAM(0)
        cv.NamedWindow("Target", 1)

    def run(self):
        # Capture the first frame to get the frame size
        frame = cv.QueryFrame(self.capture)
        frame_size = cv.GetSize(frame)
        grey_image = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_8U, 1)
        moving_average = cv.CreateImage(cv.GetSize(frame), cv.IPL_DEPTH_32F, 3)
        difference = None
        movement = []

        while True:
            # Capture a frame from the webcam
            color_image = cv.QueryFrame(self.capture)

            # Smooth to get rid of false positives
            cv.Smooth(color_image, color_image, cv.CV_GAUSSIAN, 3, 0)

            if not difference:
                # Initialize
                difference = cv.CloneImage(color_image)
                temp = cv.CloneImage(color_image)
                cv.ConvertScale(color_image, moving_average, 1.0, 0.0)
            else:
                cv.RunningAvg(color_image, moving_average, 0.020, None)

            # Convert the scale of the moving average
            cv.ConvertScale(moving_average, temp, 1.0, 0.0)

            # Subtract the moving average from the current frame
            cv.AbsDiff(color_image, temp, difference)

            # Convert the image to grayscale
            cv.CvtColor(difference, grey_image, cv.CV_RGB2GRAY)

            # Convert the image to black and white
            cv.Threshold(grey_image, grey_image, 70, 255, cv.CV_THRESH_BINARY)

            # Dilate and erode to get object blobs
            cv.Dilate(grey_image, grey_image, None, 18)
            cv.Erode(grey_image, grey_image, None, 10)

            # Calculate movements
            storage = cv.CreateMemStorage(0)
            contour = cv.FindContours(grey_image, storage, cv.CV_RETR_CCOMP, cv.CV_CHAIN_APPROX_SIMPLE)
            points = []

            while contour:
                # Draw rectangles
                bound_rect = cv.BoundingRect(list(contour))
                contour = contour.h_next()

                pt1 = (bound_rect[0], bound_rect[1])
                pt2 = (bound_rect[0] + bound_rect[2], bound_rect[1] + bound_rect[3])
                points.append(pt1)
                points.append(pt2)
                cv.Rectangle(color_image, pt1, pt2, cv.CV_RGB(255, 0, 0), 1)

            num_points = len(points)

            if num_points:
                # Average the x coordinates of all rectangle corners
                x = 0
                for point in points:
                    x += point[0]
                x /= num_points

                movement.append(x)

            if len(movement) > 0 and numpy.average(numpy.diff(movement[-30:-1])) > 0:
                print 'left'
            else:
                print 'right'

            # Display the frame to the user
            cv.ShowImage("Target", color_image)

            # Listen for Esc or Enter key
            c = cv.WaitKey(7) % 0x100
            if c == 27 or c == 10:
                break

if __name__ == "__main__":
    t = Target()
    t.run()

It detects the average motion of the average center of all of the boxes, which is extremely inefficient. How would I go about detecting such motions quickly and accurately (i.e. within a threshold)?
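One cheap way to make the left/right decision more robust than a raw diff of the centroid track is to average the frame-to-frame displacement over a short window and ignore anything inside a noise dead zone. A minimal sketch of that idea (the function name, window size, and pixel threshold are my own illustrative choices, not from the original script):

```python
import numpy as np

def classify_motion(centroid_xs, window=30, threshold=1.5):
    """Classify horizontal motion from a history of blob-centroid x positions.

    Returns 'left', 'right', or 'none' based on the mean frame-to-frame
    x displacement over the last `window` samples. `threshold` (pixels per
    frame) is a dead zone so sensor noise does not register as motion.
    """
    recent = np.asarray(centroid_xs[-window:], dtype=float)
    if len(recent) < 2:
        return 'none'
    mean_dx = np.mean(np.diff(recent))
    if mean_dx > threshold:
        return 'right'   # x increasing: motion toward the right of the frame
    if mean_dx < -threshold:
        return 'left'    # x decreasing: motion toward the left of the frame
    return 'none'

# Example: a blob swept steadily toward the left edge of the frame
track = list(range(300, 100, -10))   # x positions decreasing over time
print(classify_motion(track))        # prints 'left'
```

The dead zone means a nearly static book never reports a spurious direction, which the original always-print-left-or-right logic cannot avoid.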

I'm using Python, and I plan to stick with it, as my whole framework is based on Python.

Any help would be appreciated. Thank you in advance. Cheers.

I haven't used OpenCV in Python before, just a bit in C++ with openFrameworks.

For this I presume optical flow's velx, vely outputs would work.

For more on how optical flow works, check out this paper.
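Assuming you have velx and vely as dense per-pixel velocity arrays (e.g. from the old cv.CalcOpticalFlowLK, converted to numpy), the direction can be read off by letting only the pixels that are actually moving vote on the mean horizontal velocity. A sketch of that reduction; dominant_direction and the magnitude threshold are illustrative choices, not part of any OpenCV API:

```python
import numpy as np

def dominant_direction(velx, vely, min_magnitude=0.5):
    """Reduce dense optical-flow fields to a single dominant direction.

    `velx`/`vely` are per-pixel horizontal/vertical velocity arrays.
    Only pixels whose flow magnitude exceeds `min_magnitude` vote, so
    the static background does not dilute the page's motion.
    """
    velx = np.asarray(velx, dtype=float)
    vely = np.asarray(vely, dtype=float)
    mag = np.hypot(velx, vely)           # per-pixel flow magnitude
    moving = mag > min_magnitude         # mask of pixels that moved
    if not moving.any():
        return 'none'
    mean_vx = velx[moving].mean()        # mean horizontal velocity of movers
    return 'right' if mean_vx > 0 else 'left'

# Synthetic flow field: everything drifting 2 px/frame to the left
vx = np.full((120, 160), -2.0)
vy = np.zeros((120, 160))
print(dominant_direction(vx, vy))        # prints 'left'
```

Averaging the flow field directly avoids the contour/bounding-box stage entirely, which is where most of the questioner's inefficiency lives.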

hth

