C++ app performance varies when using threads
I have a C++ app with 2 threads. The app displays a gauge on screen, and the indicator rotates based on an angle received via a UDP socket. The problem is that the indicator, which should be rotating at a constant rate, behaves as if time slows down at times, then fast-forwards to catch up at other times, and pauses intermittently.
Each frame, the display (main) thread makes a guarded copy of the angle written by the UDP thread, and the UDP thread guards its writes to the shared variable. I use a Windows CRITICAL_SECTION object to guard the 'communication' between the threads. The UDP packets are received at approximately the same rate as the display updates. I am using Windows 7, 64-bit, with a 4-core processor.
I am using a separate Python app to broadcast the UDP packets, and I use the Python function time.sleep to keep the broadcast at a constant rate.
Why does the application slow down? Why does it seem to fast-forward instead of snapping to the latest angle? What is the proper fix?
Edit: I am not 100% sure the angle values are remembered when the app seems to 'fast-forward'. The app does snap to a value (though I'm not sure it is the 'latest') at times.
Edit 2: As requested, the code.
void app::udp_update(DWORD thread_id)
{
    packet p;
    _socket.recv(p); // edit: blocks until a transmission is received
    {
        locker lock(_cs);
        _packet = p;
    }
}

void app::main_update()
{
    float angle_copy = 0.f;
    {
        locker lock(_cs);
        angle_copy = _packet.angle;
    }
    draw(angle_copy); // edit: blocks until the monitor refreshes
}
thread.h
class cs
{
private:
    friend class locker;

    CRITICAL_SECTION _handle;

    void _lock();
    void _unlock();

    // not implemented by design
    cs(cs&);
    cs& operator=(const cs&);

public:
    cs();
    ~cs();
};

class locker
{
private:
    cs& _cs;

    // not implemented by design
    locker();
    locker(const locker&);
    locker& operator=(const locker&);

public:
    locker(cs& c) : _cs(c)
    {
        _cs._lock();
    }

    ~locker()
    {
        _cs._unlock();
    }
};

class win32threadpolicy
{
public:
    typedef functor<void, typelist_1(DWORD)> callback;

private:
    callback _callback;
    //SECURITY_DESCRIPTOR _sec_descr;
    //SECURITY_ATTRIBUTES _sec_attrib;
    HANDLE _handle;
    //DWORD _exitvalue;
#ifdef USE_BEGIN_API
    unsigned int _id;
#else // USE_BEGIN_API
    DWORD _id;
#endif // USE_BEGIN_API
    /*volatile*/ bool _is_joined;

#ifdef USE_BEGIN_API
    static unsigned int WINAPI threadproc( void* lpparameter );
#else // USE_BEGIN_API
    static DWORD WINAPI threadproc( LPVOID lpparameter );
#endif // USE_BEGIN_API

    DWORD _run();
    void _join();

    // not implemented by design
    win32threadpolicy();
    win32threadpolicy(const win32threadpolicy&);
    win32threadpolicy& operator=(const win32threadpolicy&);

public:
    win32threadpolicy(callback& func);
    ~win32threadpolicy();

    void spawn();
    void join();
};

/// Helps manage parallel operations.
/// Attempts to mimic the C++11 std::thread interface, but passes the thread id.
class thread
{
public:
    typedef functor<void, typelist_1(DWORD)> callback;
    typedef win32threadpolicy platformpolicy;

private:
    platformpolicy _platform;

    /// not implemented by design
    thread();
    thread(const thread&);
    thread& operator=(const thread&);

public:
    /// Begins parallel execution of the parameter, func.
    /// \param func, the function object to be executed.
    thread(callback& func) : _platform(func)
    {
        _platform.spawn();
    }

    /// Stops parallel execution and joins the main thread.
    ~thread()
    {
        _platform.join();
    }
};
thread.cpp
#include "thread.h" void cs::_lock() { ::entercriticalsection( &_handle ); } void cs::_unlock() { ::leavecriticalsection( &_handle ); } cs::cs() : _handle() { ::memset( &_handle, 0, sizeof(critical_section) ); ::initializecriticalsection( &_handle ); } cs::~cs() { ::deletecriticalsection( &_handle ); } win32threadpolicy::win32threadpolicy(callback& func) : _handle(null) //, _sec_descr() //, _sec_attrib() , _id(0) , _is_joined(true) , _callback(func) { } void win32threadpolicy::spawn() { // example of managing descriptors, see: // http://msdn.microsoft.com/en-us/library/windows/desktop/aa446595%28v=vs.85%29.aspx //bool success_descr = ::initializesecuritydescriptor( &_sec_descr, security_descriptor_revision ); //todo: want start create_suspended ? // todo: wrap exception handling #ifdef use_begin_end // http://msdn.microsoft.com/en-us/library/kdzttdcb%28v=vs.100%29.aspx _handle = (handle) _beginthreadex( null, 0, &thread::threadproc, this, 0, &_id ); #else // use_begin_end _handle = ::createthread( null, 0, &win32threadpolicy::threadproc, this, 0, &_id ); #endif // use_begin_end } void win32threadpolicy::_join() { // signal thread should complete _is_joined = true; // maybe ::wfso not best solution. // "except waitforsingleobject , big brother waitformultipleobjects dangerous. // basic problem these calls can cause deadlocks, // if ever call them thread has own message loop , windows." // http://marc.durdin.net/2012/08/waitforsingleobject-why-you-should-never-use-it/ // // advises use msgwaitformultipleobjects instead: // http://msdn.microsoft.com/en-us/library/windows/desktop/ms684242%28v=vs.85%29.aspx dword result = ::waitforsingleobject( _handle, infinite ); // _handle must have thread_query_information security access enabled use following: //dword exitcode = 0; //bool success = ::getexitcodethread( _handle, &_exitvalue ); } win32threadpolicy::~win32threadpolicy() { } void win32threadpolicy::join() { if( !_is_joined ) { _join(); } // example shows correct pass handle returned createthread // http://msdn.microsoft.com/en-us/library/windows/desktop/ms682516%28v=vs.85%29.aspx ::closehandle( _handle ); _handle = null; } dword win32threadpolicy::_run() { // todo: need make sure _id has been assigned? while( !_is_joined ) { _callback(_id); ::sleep(0); } // todo: should return? return 0; } #ifdef use_begin_end unsigned int winapi thread::threadproc( lpvoid lpparameter ) #else // use_begin_end dword winapi win32threadpolicy::threadproc( lpvoid lpparameter ) #endif // use_begin_end { win32threadpolicy* tptr = static_cast<win32threadpolicy*>( lpparameter ); tptr->_is_joined = false; // when function (threadproc) returns, ::exitthread used terminate thread "implicit" call. // http://msdn.microsoft.com/en-us/library/windows/desktop/ms682453%28v=vs.85%29.aspx return tptr->_run(); }
I know this is a bit in assumption space, but:
The rate you are talking about is set in the "server" and the "client" via sleep, and it controls the speed at which packets are sent. It is not the rate of the actual transmission, since the OS can schedule the processes in an asymmetric way (time-wise).
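One way to see this effect directly is to timestamp each datagram as it arrives and log the inter-arrival gaps: even if the sender paces its sends perfectly with sleep, a scheduling stall shows up as one long gap followed by a burst of near-zero gaps. The following is only a diagnostic sketch, not part of the question's code; it assumes a raw, blocking Winsock socket that is already bound (the poster's _socket.recv wrapper is not shown in enough detail to reuse), and the angle_packet struct is hypothetical.

// Diagnostic sketch: log the gap between consecutive datagram arrivals to
// check whether packets really arrive at the rate the sender paces them.
// Assumes WSAStartup has been called and 's' is a bound, blocking UDP socket.
#include <winsock2.h>
#include <cstdio>

#pragma comment(lib, "ws2_32.lib")

struct angle_packet { float angle; };   // hypothetical packet layout

void log_interarrival(SOCKET s)
{
    LARGE_INTEGER freq, prev, now;
    ::QueryPerformanceFrequency(&freq);
    ::QueryPerformanceCounter(&prev);

    for (;;)
    {
        angle_packet p;
        int n = ::recvfrom(s, reinterpret_cast<char*>(&p), sizeof(p), 0, NULL, NULL);
        if (n != sizeof(p))
            break;  // error handling is omitted in this sketch

        ::QueryPerformanceCounter(&now);
        double gap_ms = 1000.0 * (now.QuadPart - prev.QuadPart) / freq.QuadPart;
        prev = now;

        // A steady sender should produce roughly constant gaps. A scheduling
        // stall shows up as one long gap followed by several ~0 ms gaps.
        std::printf("angle=%.2f gap=%.2f ms\n", p.angle, gap_ms);
    }
}

If that log shows bursts while the sender is steady, the backlog is piling up in the socket's receive buffer, which is the effect described next.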
This can mean that when the server gets more processor time, it fills the OS buffer with packets (while the client gets less time and therefore consumes them at a lower rate => slowing down the meter). Then, when the client gets more time than the server, it consumes the packets fast, while the update thread is still waiting. That does not mean it will "snap", because you are using a critical section to lock the packet update, so the receive thread does not consume many packets from the OS buffer between updates. (You may get a "snap to", but in small steps.) I am basing this on the fact that I see no actual sleeping in the receive or update methods (the sleeping is done on the server side).
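If the goal is for the gauge to snap to the newest angle instead of replaying the backlog one packet per frame, one common approach is to drain everything queued on the socket each pass and keep only the last datagram before publishing it to the display thread. The sketch below makes the same assumptions as the one above (raw non-blocking Winsock, hypothetical angle_packet struct, drain_latest as an illustrative name); it is not the poster's _socket API.

// Sketch: drain all queued datagrams and keep only the newest one, so the
// display never replays a backlog. Assumes 's' is a bound UDP socket and
// WSAStartup has already been called.
#include <winsock2.h>

#pragma comment(lib, "ws2_32.lib")

struct angle_packet { float angle; };   // hypothetical packet layout

// Returns true if at least one datagram was read; 'latest' then holds the
// most recently received packet.
bool drain_latest(SOCKET s, angle_packet& latest)
{
    // Put the socket into non-blocking mode (could also be done once at setup).
    u_long non_blocking = 1;
    ::ioctlsocket(s, FIONBIO, &non_blocking);

    bool got_any = false;
    for (;;)
    {
        angle_packet p;
        int n = ::recvfrom(s, reinterpret_cast<char*>(&p), sizeof(p), 0, NULL, NULL);
        if (n == sizeof(p))
        {
            latest = p;     // older packets are overwritten: only the newest survives
            got_any = true;
        }
        else
        {
            // WSAEWOULDBLOCK means the queue is empty; other errors are left
            // to the caller in this sketch.
            break;
        }
    }
    return got_any;
}

The UDP thread would then copy latest into _packet under the critical section exactly as before; the only difference is that a burst of delayed datagrams collapses into a single update instead of being played back at one packet per frame.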