
Comments (11)

jhdewitt commented on July 24, 2024

Hello, thank you for your kind comment. A setup guide is notably absent as of yet. I'm hoping to get a draft up in the wiki soon; perhaps that would be OK.

jhdewitt commented on July 24, 2024

Is there a hardware configuration that you are hoping to use? So far the primary hardware has consisted of a Raspberry Pi with the stock camera and a pico projector on a fixed rigid mount, so this is the configuration I will be focusing on for setup.

prahjister commented on July 24, 2024

I really don't know the full scope based on what I have read, but an OctoPrint plugin would be fantastic. I was thinking the same thing with a pico projector. OctoPrint already has the webcam streamer working well out of the box, and it has pins to control a turntable. Armbian works as well, and it runs on a lot of different hardware with GPIO pins available that is much faster than a Raspberry Pi. They are very accommodating at Armbian. I have a ton of questions, and if you would entertain them I am available on Skype. This is pretty exciting stuff.

jhdewitt commented on July 24, 2024

3D printing is a science/art that I have no direct experience with, though it often catches my attention. Perhaps this reading material will be of interest to you: https://3dprint.com/9952/mit-researchers-smart-3d-printer/ Is measuring print dimensions a desired use case?

Anyways, this is a rough draft of a setup/howto: https://github.com/jhdewitt/sltk/wiki/Hardware-Setup-Guide

Feedback is appreciated!

As for questions, if you are not opposed I would prefer to address most of them in a public forum so that anyone else with similar questions can benefit from your curiosity. I'll happily try to answer any here.

jhdewitt commented on July 24, 2024

http://docs.octoprint.org/en/master/plugins/gettingstarted.html

Seems like OctoPrint plugins are Python programs? Seems pretty approachable. Not sure how it would fit into everything, but the idea is interesting.

prahjister commented on July 24, 2024

Here is the first batch of questions:

1. Can all these functions run on a Raspberry Pi?
2. Should the alignment be off-loaded to a faster computer for better performance?
3. Can 2 or more cameras be used to speed up the process?
4. Have you tested with a lower-end projector?
5. Can you write a list of commands with example switches from beginning to end?
6. Can you write detailed steps for compiling the tools?
7. What influence does projector resolution have on a scan?
8. What influence does camera resolution have on a scan?
9. How big/how small can it scan?
10. Resolution details of the final mesh vs. the scanned object?
11. Best ambient lighting conditions?
12. Post-processing tools?
13. Details on the auto turntable?
14. Does the calibration of the camera need to fill up the entire field of view, and does this dictate the focal point of the scanned part?
15. How often does it need to be calibrated? If the setup is taken down and then roughly put back up, does a recalibration need to happen?
16. Do you think a handheld unit could be made if a small projector is used and just passed over the object slowly?

jhdewitt commented on July 24, 2024

Oh boy!

1. Can all these functions run on raspberry pi
Yes; although macOS was the main development environment, I tried to make sure it all compiled fine on my RPi.

2. Should the alignment be off-loaded to faster computer for better performance
Capturing is pretty low stress, just writing JPG files. Processing each set of images (slcrunch, slcalibrate) is more intensive and would still benefit from a faster computer. I prefer to run slcapture.py on my laptop, which is able to process the data faster, leaving the Raspberry Pi as a raw data source of JPG files.

3. Can 2 or more cameras be used to speed up the process
More cameras make it more likely that every available projected pixel will be seen and reconstructed, resulting in more robust point cloud density. Perhaps the increased coverage could provide slack for speeding up some other part of capture. I haven't done extensive testing with multiple cameras yet, though it would be modeled as many-to-one (many cameras to one projector) if explicit support is added in the future. For now, multi-camera scaling is limited by network bandwidth, because the JPG stream is sent live to the control program slcapture.py rather than storing images locally on the camera computer and transferring them at the end of the sequence (which would likely allow for faster capture).

4. Have you tested with lower end projector
Yes; it's surprisingly good at reconstructing the lower-frequency patterns. The most common issue was getting the focus sharp enough that the camera could read the higher-frequency patterns. [image example of spatial frequency high/low] Quickly alternating black/white gray code patterns are a major limiting factor when it comes to getting the finest possible detail: if the fine patterns are blurred, no difference will be found between the pattern and its inverse (X - (1-X) goes to zero as X approaches mid-gray) and the pixel will be rejected as noise. There's a manual setting for this in slcrunch, "-b", which allows the last N bits of the sequence to fail so that less sharp projectors won't produce empty correspondence maps.
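
To make the rejection rule concrete, here is a minimal decoding sketch in Python (an illustration only, not slcrunch's actual implementation; the function name, threshold, and defaults are invented):

    import numpy as np

    def decode_graycode(patterns, inverses, noise_thresh=8, allow_fail_bits=1):
        # patterns / inverses: lists of HxW uint8 images, coarsest bit first
        n_bits = len(patterns)
        h, w = patterns[0].shape
        gray = np.zeros((h, w), dtype=np.uint32)
        valid = np.ones((h, w), dtype=bool)
        for i, (pat, inv) in enumerate(zip(patterns, inverses)):
            diff = pat.astype(np.int16) - inv.astype(np.int16)
            gray |= (diff > 0).astype(np.uint32) << (n_bits - 1 - i)
            if i < n_bits - allow_fail_bits:           # finest bits may fail (-b)
                valid &= np.abs(diff) >= noise_thresh  # blurred pattern -> rejected
        # standard gray-to-binary conversion via prefix XOR
        binary, shift = gray.copy(), 1
        while shift < n_bits:
            binary ^= binary >> shift
            shift <<= 1
        return binary, valid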

5. Can you write a list of commands with example switches from beginning to end
I will compile a list of example commands and add them to the wiki. The plan is for some shell scripts to handle the common cases, e.g.:

  • CRUNCHCMD="slcrunch -prefix process/ -vis -yaml -s 5.946 -chessidx 23 -chess 12 6 -prosize 1920x1080 -c -b 1"
    this command format is for processing the images of chessboards for calibrating the projector lens
  • CRUNCHCMD="slcrunch -vis -b 1 -c -prefix process/ -ply -procam"
    this command format is for processing the images of graycode patterns + cal data into pointclouds

Sorry to be vague on these; for now there's some more info in the comments:
slcrunch argument options
slcalibrate argument options
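
In the meantime, the same crunch step can be driven from a few lines of Python (a sketch that assumes slcrunch is built and on your PATH, reusing the chessboard flags quoted above):

    import subprocess

    # Flags copied from the chessboard-calibration example above;
    # "process/" holds the captured images.
    cmd = ["slcrunch", "-prefix", "process/", "-vis", "-yaml",
           "-s", "5.946", "-chessidx", "23", "-chess", "12", "6",
           "-prosize", "1920x1080", "-c", "-b", "1"]
    subprocess.run(cmd, check=True)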

6. Can you write detailed steps for compiling tools.
In previous versions, I had individual compile shell scripts for each program :) thankfully those days are past. The Makefile should work on recent macOS/Linux distributions, but I've only tested it on my machine. If you have make installed, it should work to cd into the project directory and type make.
Will update wiki with more detailed instructions.

7. What influence does projector resolution have on a scan.
The number of projector pixels directly determines the possible number of output cloud points: more pixels = more spatial resolution to address world space. Angular resolution should ideally be matched with the camera (e.g. 1 px/arcsec).

8. What influence does camera resolution have on a scan
Likewise, the number of camera pixels directly determines the possible number of output cloud points: more pixels = more spatial resolution to address world space. Angular resolution should ideally be matched with the projector (e.g. 1 px/arcsec).
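
To put numbers on "matched angular resolution" (a hypothetical worked example; the FoV figures are invented, not measured from any particular hardware):

    def arcsec_per_pixel(fov_deg, pixels):
        # angular resolution along one axis, assuming pixels spread evenly over the FoV
        return fov_deg * 3600.0 / pixels

    # Hypothetical pairing: a 1080p projector with a ~30 degree throw next to
    # a 5 MP camera (2592 px wide) with a ~60 degree horizontal FoV.
    print(arcsec_per_pixel(30, 1920))  # ~56 arcsec per projector pixel
    print(arcsec_per_pixel(60, 2592))  # ~83 arcsec per camera pixel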

9. How big/how small can it scan
This depends entirely on the camera and projector selected; sorry if this sounds like a cop-out. Low-end projectors have softer images (weaker high-frequency output), so getting satisfactory spatial resolution on "small" items can become intractable. In my experience, the RPi + Sony setup was able to get "useful" detail on thumb-sized objects (~5 cm) up to living-room size (~5 m). Range is largely determined by the projector's brightness and sharpness, and by how sensitive and how sharp the camera is.

10. Resolution details of final mesh vs scanned object
The final mesh might need to be decimated significantly for mobile use. With a ~1 MP projector (720p is 1280x720 = 921,600 pixels) and a 5 MP/8 MP camera, up to ~1,000,000 points can be captured in one sequence, since each reconstructed point corresponds to a projector pixel seen by the camera. 50-250K points is more realistic for that setup, depending on how much of the frame the object takes up and how underexposed it is. With ~40 scan sequences per rotation, the raw point aggregate can reach into the low millions of vertices. Generally I reduce geometric detail in post-processing and project texture, etc.

11. Best ambient lighting conditions
Minimizing ambient light can have a big impact on the number of reconstructed points. Ideally, in addition to the gray code patterns for 3D reconstruction, a few additional flat color frames should be collected: display all the gray code patterns and then fullscreen [white, gray, black, red, green, blue] to capture the maximum color data. For now, color data is extracted from the gray code sequence data.

12. Post processing tools?
Check out MeshLab, CloudCompare, Agisoft PhotoScan, and Allegorithmic Substance Painter.

13. Details on auto turntable
I wrote some janky Arduino firmware that receives packets of [direction, number of steps] and triggers stepper motor motion. I haven't uploaded this, so technically auto-turn scanning is inoperable at this point. Do you know of any standard USB communication format for turntable devices that I could adopt?
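
For reference, the host side of such a protocol can be tiny (a hypothetical sketch with pyserial; the packet framing, encoding, and baud rate are invented, and the unpublished firmware may differ):

    import struct
    import serial  # pyserial

    def turn(port, direction, steps):
        # one packet per move: a direction byte (0 = CW, 1 = CCW, hypothetical
        # encoding) followed by a little-endian uint16 step count
        with serial.Serial(port, 9600, timeout=2) as ser:
            ser.write(struct.pack("<BH", direction, steps))

    turn("/dev/ttyUSB0", 0, 50)  # advance 50 steps clockwise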

14. Does the calibration of the camera need to fill up entire field of view and does this dictate the focal point of scanned part
Yes; using a chessboard it is hard to get good data all the way to the edge of the FoV. For this reason I use a flatscreen LCD panel to capture dense camera correspondences with the calibration object (pixel pitch on displays is known, so pixel address (x, y) * pixel pitch = real-world XYZ, because Z = 0).

With the chessboard camera calibration method, yes, this can easily crop up and dictate the usable extent of the field of view (dictate the focal point). With the flatscreen camera calibration method it is very easy to fill the entire field of view with pixels, which will result in a file that accurately describes the entire field of view. All this assumes a camera with fixed focus, focal length, etc.
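
In code terms, the flat panel hands you dense planar object points almost for free (a sketch of the geometry only; the pitch value is invented, and correspondences built this way would feed a routine like OpenCV's calibrateCamera):

    import numpy as np

    pitch_mm = 0.2745  # hypothetical pixel pitch of the calibration display

    def panel_object_points(pixel_xy):
        # display pixel (x, y) maps to world (x*pitch, y*pitch, 0); the panel
        # is flat, so Z = 0 by construction
        pts = np.asarray(pixel_xy, dtype=np.float32)
        return np.hstack([pts * pitch_mm, np.zeros((len(pts), 1), np.float32)])

    # e.g. the four corner pixels of a 1920x1080 panel:
    print(panel_object_points([(0, 0), (1919, 0), (0, 1079), (1919, 1079)]))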

15. How often need to calibrate? If setup is taken down then roughly put back up does a recalibration need to happen
Calibration consists of intrinsic and extrinsic parts. Intrinsics don't change on fixed-focus hardware; extrinsics change whenever the camera and projector move relative to each other.
For now the software assumes that the calibration file contains accurate extrinsic data (rotation + translation) between the camera and projector. If the camera/projector are moved, the previous calibration file can be reused (the intrinsics won't change), but a new chessboard image must be taken so that the rotation + translation can be updated. This warrants more info in the wiki, because calibration can be ominous.

16. Do you think a handheld unit be made if small projector is used and just pass over object slowly.
This software was made with the implicit assumption that the projector/camera are stationary with respect to each other, as well as with respect to the object/environment. The time required to project and capture ~40 patterns on a Raspberry Pi can be as low as 1 minute with gray code patterns. Using sinusoidal patterns could bring this time down further, but my honest opinion is that it would be a challenge to get it working robustly on Raspberry-Pi-level hardware. A stationary head + auto turntable is the intended best fit.

Hope this somewhat addressed your inquiry.

prahjister commented on July 24, 2024

One quick thought for the turntable: the way I would like to use this is to use the existing print bed as the scanning area and hijack one of the stepper motors. Extend 4 wires from the existing stepper motor to a spare one. Then I believe it would be trivial to send gcode commands to rotate the plate. This is assuming that everything is running on the OctoPrint server.
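
Something like the following sketch via OctoPrint's REST API (host, API key, feed rate, and the axis letter are placeholders to adapt to the machine; M302 P1 permits cold extruder moves on Marlin firmware):

    import requests

    OCTOPRINT = "http://octopi.local"  # placeholder host
    API_KEY = "YOUR_API_KEY"           # placeholder OctoPrint API key

    def rotate_bed(degrees):
        # relative move on the hijacked stepper; which axis letter (E here)
        # addresses it depends on the wiring
        gcode = ["M302 P1", "G91", "G1 E%.2f F200" % degrees, "G90"]
        r = requests.post(OCTOPRINT + "/api/printer/command",
                          headers={"X-Api-Key": API_KEY},
                          json={"commands": gcode})
        r.raise_for_status()

    rotate_bed(9.0)  # one of ~40 stops per full rotation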

I have a Raspberry Pi with OctoPrint, but I will set up on an OctoPrint instance that I have running on an s905x box because it is way faster.

I was also thinking about getting a VIM s905x SBC. GPIO pins are available like on the Raspberry Pi.

I have all the hardware for testing: a native 1080p projector (but huge), an s905x box for OctoPrint, a Logitech C270, and a screen.

Maybe an ESP8266 with MQTT and a stepper driver (I know it complicates things more).

For some of the hardware I would probably design parts and host them on Thingiverse.

prahjister commented on July 24, 2024

EinScan states 0.1 mm accuracy. What do you think the accuracy is with your setup? I have only seen the shell scan, and it was probably decimated to reduce size.

How do you think the scans compare to the EinScan and other semi-professional structured light scanners?

jhdewitt commented on July 24, 2024

The shell linked in the README is 7 inches tall from the ground to the highest point. I'll try to estimate the inter-point distance on that model tonight and get back to you. This is another model made using the RasPi homebrew setup: https://sketchfab.com/models/ad622111edc448d1a349ca66f641e73b (it is ~45 mm tall). Perhaps this will help address accuracy. I'm quite certain it's <1 mm, but I can't confidently give a lower bound at this time.

This is something that should probably be output automatically during the calibration step. With a known-size reference object, this should be easy to implement.
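
One quick way to estimate the inter-point distance is the median nearest-neighbor distance over the cloud (a sketch using SciPy; the filename is a placeholder for an exported ASCII cloud):

    import numpy as np
    from scipy.spatial import cKDTree

    pts = np.loadtxt("cloud.xyz")[:, :3]  # placeholder: ASCII "x y z" export
    tree = cKDTree(pts)
    dist, _ = tree.query(pts, k=2)        # k=2: self plus nearest real neighbor
    print("median inter-point distance:", np.median(dist[:, 1]))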

I don't have any experience with the EinScan. In my opinion, with both a 1080p camera and a 1080p projector and a thoughtful setup, results similar to prosumer/semi-pro white light scanners can be achieved. If you can properly expose your subject and get the focal planes lined up, the results should be pleasing. In my experience, aligning and merging the resulting point clouds is a bigger challenge than ensuring dense input clouds to start with. Some objects, such as pots, are geometrically rotationally symmetric and demand something like visual keypoint matching, which is not implemented in this project yet.

If you have any other unaddressed queries, I'm happy to field them in this thread.

prahjister commented on July 24, 2024

Thank you so much. Enough talking... I will start trying to set this up this evening.
