That being said, I got a new toy for Christmas (well, I bought it myself): Agisoft Photoscan. I have been going wild taking pictures and scanning them in. That's when I noticed a major shortcoming of the workflow: Agisoft's scans are pretty crude sometimes, and the textures and unwrapping leave a lot to be desired. Then I figured out that you can export Collada from Agisoft, which gives you all of the camera positions. From there I was able to correlate the field of view and magnification of each camera back onto the object. This let me retopologize the scans, unwrap them, and then retexture the resultant object using texture painting in Blender. Since this was somewhat tedious, I wrote a Python script that automates the camera projection painting setup. I finally got it working pretty well; below are a few objects I've scanned and started down this workflow path with.
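To give a feel for the camera-correlation step, here is a minimal sketch of pulling each camera's horizontal field of view out of a Collada export and converting it to a Blender-style focal length. This is an illustrative standalone snippet, not my actual script: it assumes the standard Collada 1.4 layout (`library_cameras` → `camera` → `optics` → `perspective` → `xfov`) and Blender's default 36 mm sensor width; your export may differ.

```python
import math
import xml.etree.ElementTree as ET

# Collada 1.4 schema namespace (what Photoscan-era exporters use)
NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

def camera_params(collada_xml, sensor_width_mm=36.0):
    """Map camera id -> horizontal FOV (deg) and equivalent focal length (mm).

    Assumes each <camera> carries a <perspective> block with an <xfov>
    element, per the Collada 1.4 spec. sensor_width_mm defaults to
    Blender's 36 mm full-frame sensor.
    """
    root = ET.fromstring(collada_xml)
    cams = {}
    for cam in root.findall(".//c:library_cameras/c:camera", NS):
        persp = cam.find(".//c:perspective", NS)
        if persp is None:
            continue
        xfov_deg = float(persp.find("c:xfov", NS).text)
        # Standard pinhole relation: lens = sensor_width / (2 * tan(fov / 2))
        lens_mm = sensor_width_mm / (2.0 * math.tan(math.radians(xfov_deg) / 2.0))
        cams[cam.get("id")] = {"xfov_deg": xfov_deg, "lens_mm": lens_mm}
    return cams
```

With values like these (plus the camera transforms from the same file), each Blender camera can be set to match its real-world counterpart before projection painting.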
Below is another example I'm in the middle of. The chair on the left is the scan; the chair on the right is my retopo of it (you can see I'm not done yet, as I just started on the scrolls). Once the retopo is finished, I will texture paint the scan onto the new chair.