waveform80 / picamera

A pure Python interface to the Raspberry Pi camera module

Home Page: https://picamera.readthedocs.io/

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 99.2%, Makefile 0.8%
Topics: raspberry-pi, raspberry-pi-camera, python

picamera's Issues

Segmentation of video files

Now there's a cool idea. Permit segmentation of video files by parameters to start_recording(). We should permit segmentation by time elapsed, by bytes written, and potentially by callback (the former two can be implemented as fixed callbacks if we implement the latter).

How to manage output filenames? Should we use a format-driven method like capture_continuous, or an iterator-driven method like capture_sequence? Or perhaps find a way to permit both: e.g. if output is a string and segmentation is requested, assume it's a format-string; otherwise assume it's an iterator of strings/streams.

If iterator-driven, what happens when we reach the end of the iterator (if it's finite)? We should add a "cycle" parameter: when False, terminate the recording; when True, cycle back to the first element of the iterator (itertools can cache the cycle in this case).

Best use of circular buffer

Hi Dave

You are a mindreader! Twice you added features that I thought would be nice to have. It's great to see such active development of Picamera. I hope you can find the time to provide some info on how to best use Picamera.

Let me first explain what I want to achieve: I want to create a motion-triggered video capture device (for security purposes). I want some time (say 5-10 seconds) before the motion trigger to be recorded as well.

My first approach was to use the recent addition to Picamera of image capture during video recording. I continuously record 10 second sections of video to memory (split_recording) and also capture images every 0.3 seconds to memory (use_video_port=True). I use the images to detect motion (more or less counting the changed pixels). If motion is detected I save the current video section, and possibly the next section of video, to disk. This works - I get sections of video around the movement - but the program wasn't very elegant.

Then you read my mind again and introduced the circular buffer. This promises to make the video recording a bit more streamlined. The problem is that I can't get my head around the circular buffer reading (pun intended). I noticed when testing with the second provided example, modified to write to a new file each time instead of overwriting the same file, that one gets video with overlapping events if the files are written more frequently than the buffer duration. This is to be expected I guess, because writing starts at the first available frame, which may also be present in the previous video.
So I guess my question, at long last, is: how can I best apply the circular buffer for my purpose?

  • How can I start video recording and keep recording until x seconds after the last detected motion (assuming I will keep detecting motion in the same way), or
  • If I keep writing sections of video, how to start writing to disk not from the first available frame, but from the next frame after the previously written video?

Thanks for your patience and insights.

Marcel

Control camera LED

The library should include an optional property for controlling the camera's LED via the GPIO pins. Although this will introduce a dependency on a GPIO library, this can be made optional (i.e. have the attribute throw an error in the case a suitable GPIO library is not available).
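
A sketch of how such a property might look from user code (the led attribute here is the proposed interface, and assumes a suitable GPIO library is installed):

```python
import picamera

# Sketch of the proposed write-only led property; raises an error if no
# GPIO library (e.g. RPi.GPIO) is available.
with picamera.PiCamera() as camera:
    camera.led = False          # turn the camera module's LED off
    camera.capture('dark.jpg')  # capture without the LED glowing
    camera.led = True           # and back on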

correction for Documentation

Hi,
thanks for the great work !
I read the documentation on ReadTheDocs.org and just wanted to say that the ISO parameter does work. The values supported are between 100 and 800, and it does make a difference (I did some tests in a relatively low-light environment: the pictures with low ISO were very dark, and much better with higher ISO).

I guess this is probably not the best way to contact you, but I couldn't find any other way (I'm new to GitHub).

Permit video-port-based capture while recording

Several people at the Manchester Raspberry Pi jam expressed interest in capturing stills while recording video (not such a niche piece of functionality as I'd originally thought). Whilst this is currently possible with picamera there are several drawbacks: the stills are a different resolution (may/may not be a bad thing), and the video encoder drops frames whilst capturing stills (definitely a bad thing). Furthermore, attempting to perform a video-port-based capture while recording results in a very confusing exception (again, a bad thing).

It may be possible to strap a splitter over the video port, then use one of its outputs for the video encoder, and another for a JPEG encoder (permitting video-port-based still capture while recording video without dropping frames ... maybe).

Add ability to capture raw camera data

Picamera's current "raw" capture capabilities only extend as far as outputting raw YUV or RGB data (i.e. the same thing raspistillyuv does), not the equivalent of raspistill -raw (which outputs a JPEG along with "truly" raw data from the camera prior to various bits of processing being performed). There's apparently some interest in getting access to "true" raw output and it doesn't look difficult so this should be added for the next release.

Need to consider the best method of doing this though: are people interested in JPEG+raw output (as with raspistill -raw) or do they just care about the raw bit (in which case it's a waste of time outputting the JPEG).

Crop is broken in several ways

The PiCamera.crop property has several issues:

  • The getter is broken (complains that 'MMAL_PARAMETER_INPUT_CROP_T' object does not support indexing)
  • When dealing with non-square crops (which squish the preview), resetting the crop is ... difficult. Setting the original value (e.g. 0.0, 0.0, 1.0, 1.0) doesn't actually restore the full view, nor does it restore the pixel's aspect ratio.

Need further investigation on this, but it may turn out that we have to retire crop and convert it into a straight "zoom" parameter. That would require targeting 1.0 though, to avoid breaking anyone's code.

Investigate MMAL resizer component

This post from MacHack references the vc.ril.resize MMAL component (a GPU-side frame resizer) which can be used to provide full-frame video (and preview perhaps?) output, albeit at a slower 15fps.

Firstly we need to come up with a way of permitting full-frame video capture (nice interface? Maybe another parameter to start_recording like use_resizer?), and secondly we need to investigate the possibility of using this with previews. If it works with previews, it could solve the disparity between previews and still captures (I don't think anyone is going to particularly care about 30fps vs 15fps previews - although it might be worth polling the forums about this).

Investigate using a component to perform RGB conversion

The video splitter component does not support RGB format. This means that since release 0.8, video-port-based raw captures can only be in YUV format. This is probably quite a niche requirement, but it's not good to remove functionality and hence we should strive to re-introduce this capability before a 1.0 release.

Raw capture with resizer is broken

I'm attempting to write a program that captures a raw stream from the video port and passes it to OpenCV, but the program fails when trying to capture the stream. I know it's possible, because before I switched to using Python, I was using a C++ class that could do this. At least, I think that's what it does.

It's definitely not an OpenCV error, as the program dies before any OpenCV functions are called.

Here's the error:

Traceback (most recent call last):
  File "main.py", line 122, in <module>
    main()
  File "main.py", line 117, in main
    frame = cam.read()
  File "/home/pi/2014-Vision/src/picam.py", line 16, in read
    self.cam.capture(stream, 'rgba', True, (320, 240))
  File "/usr/lib/python2.7/dist-packages/picamera/camera.py", line 859, in capture
    if not encoder.wait(30):
  File "/usr/lib/python2.7/dist-packages/picamera/encoders.py", line 326, in wait
    raise self.exception
AssertionError

And the code:

from __future__ import division

import picamera
import cv2
import numpy as np
import io
import time

class PiCam:

    def __init__(self):
        self.cam = picamera.PiCamera()

    def read(self):
        with open('image.data', 'wb') as stream:
            self.cam.capture(stream, 'bgra', True, (320, 240))
        # Re-open the file to read back the captured data
        image = np.fromfile('image.data', dtype=np.uint8)
        return image

cam = PiCam()
cv2.namedWindow("Camera Image", cv2.WINDOW_AUTOSIZE)
while True:
    cv2.imshow("Camera Image", cam.read())
    cv2.waitKey(1)

Debian packaging

The major question with this is what's the process for getting it accepted upstream (or is there an equivalent to Ubuntu's PPAs?)

MJPEG recording locks up camera

Under certain circumstances (resolution-related?), MJPEG recording fails to stop. It looks like an explicit end-capture signal is required (the opposite of PiCamera._enable_port).

Raw YUV capture example failing

jbeale reports that the YUV raw capture example fails at the point where numpy attempts to read from the stream. I've replicated the issue in Python 2 (in Python 3 it just segfaults - another numpy issue, or a bad build?), although for the life of me I could swear this used to work. Numpy doesn't even seem to like loading the data from an opened file object; I can only get it working with a filename. That's doubly strange, because with a filename the example never could've worked (the file would always start reading from 0), and I've definitely had this working in the past. Is this an upstream issue with some recent release of numpy? Need to investigate numpy's source...

Add raw capture method

Looks like raw capture is useful for several things so we ought to support it, presumably as another capture method ("capture_raw"?)

Split after video-port capture raises timeout error

BorisS ran into an issue with the splitter - it appears that attempting multiple recording splits results in a header wait timeout. Investigate whether this only happens at full-frame or below (resizer usage makes no difference), and whether an SPS header can be forced by split_recording (there's an MMAL parameter for this - I forget off the top of my head, but I think the latest branch of raspivid uses it under certain circumstances).

Fix inaccurate Live preview

While it's flagged in the documentation, the suggested workaround of setting the camera to full resolution doesn't appear to make the live preview and the captured image match.

For context, I'm creating a stop motion animation package. The 'onion-skinning' effect is working perfectly with the Alpha preview laid over the top of the last image taken (via Pygame), but since the live preview isn't an accurate representation of the actual image, it's impossible to align them.

If anyone has got a suggestion on how to post-process the image via PIL / Pygame in the short-term, I'd really appreciate the input. Any information on if/when there might be a fix / workaround would be really appreciated.

Thanks for the library - it's otherwise fantastic!

Finish documentation

Specifically, we need examples demonstrating integration with popular imaging libraries - PIL and OpenCV would be good. For example: capture image with camera into a PIL object without touching the disk (e.g. via BytesIO), and demonstrate something similar (e.g. some rudimentary processing) with OpenCV.

Also, the new use_video_port parameter could do with a section to itself demonstrating how to obtain the maximum capture framerate, what the differences between the still and video ports are (resolution and quality wise), how to use generator functions with capture_sequence, etc.

Remove continuous

Just a reminder to remove the deprecated continuous method for 1.0

Add circular buffer for recording

There's a rather cool patch that's just landed in raspivid (raspberrypi/userland#132) which permits recording to a circular in-memory buffer and then permitting the last n seconds (where n <= the length of the buffer) to be dumped to disk on a key-press.

It's currently possible to implement this in picamera, but non-trivial as it'd involve firstly building a custom stream to replicate the circular buffer, and then dealing with the MMAL interface to figure out the location of the SPS headers and I-frames (actually the latter could probably be done with the new frame property, but not the former currently).

We should make this easier for new users firstly by providing such a circular stream in the package, and secondly by making it easier to query the location of SPS headers.

Continuous capture not working

I have slightly modified one of the examples by commenting out the preview, because I'm running my Raspberry Pi headless:

import time
import picamera
with picamera.PiCamera() as camera:
    #camera.start_preview()
    try:
        for i, filename in enumerate(camera.continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 10:
                break
    finally:
        #camera.stop_preview()
        pass

The output is:

ssh://[email protected]:22/home/pi/picam/bin/python -u /home/pi/picam/cont_capture.py
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
image{counter:02d}.jpg
Traceback (most recent call last):
  File "/home/pi/picam/cont_capture.py", line 6, in <module>
    for i, filename in enumerate(camera.continuous('image{counter:02d}.jpg')):
  File "/home/pi/picam/local/lib/python2.7/site-packages/picamera/__init__.py", line 1118, in continuous
    self._still_encoder.wait()
  File "/home/pi/picam/local/lib/python2.7/site-packages/picamera/__init__.py", line 339, in wait
    raise self.exception
picamera.PiCameraError: Unable to return a buffer to the encoder port: Argument is invalid

Process finished with exit code 1

(Thanks for creating this module!)

Add ability to determine recording frame number

Add a frame counter to the PiVideoEncoder class' _callback_write method (in picamera/encoders.py) and a property to the PiCamera class (in picamera/camera.py) to query the frame-counter (should probably raise a runtime error if a recording is not in progress, in which case no video encoder instance will be present to query).

picamera error on Model A

Hi,

I'm getting an error when I try to execute some picamera code on a Model A. I've developed some software on a Model B and it works fine. I have come to run this on a Model A and I keep getting the same error when instantiating picamera.PiCamera().

mmal: mmal_vc_component_enable: failed to enable component: ENOSPC
Unexpected error -   Camera component couldn't be enabled: Out of resources (other than memory)
Traceback (most recent call last):
  File "/home/pi/dev/pelmetcam/pelmetcam.py", line 230, in <module>
    with picamera.PiCamera(False) as camera:
  File "/home/pi/dev/picamera/picamera/camera.py", line 239, in __init__
    self._init_camera()
  File "/home/pi/dev/picamera/picamera/camera.py", line 329, in _init_camera
    prefix="Camera component couldn't be enabled")
  File "/home/pi/dev/picamera/picamera/exc.py", line 102, in mmal_check
    }.get(status, "Unknown status error")))
picamera.exc.PiCameraError: Camera component couldn't be enabled: Out of resources (other than memory)

It's exactly the same setup - the same SD card, everything. I've rebuilt the SD card from a new Raspbian image, always with the same result: it runs fine on my Model B, but when I execute it on the Model A it fails. I've done an apt-get update, upgrade and rpi-update, but with no joy.

The model b is a 256Mb Rev 1, while the model a is a 256Mb Rev 2. Other than the rev difference, they are virtually identical.

I'm not entirely convinced this is an error with picamera specifically, but the error is being generated from picamera.PiCamera() so I thought I'd see if you had any ideas! I have run a couple of the picamera test programs on the Model A and they are fine, but when run within my code it always fails.

Any ideas?

Thanks

Martin

Video tests

Still need to figure out a way to test the output of video recordings ... defer to ffmpeg / libav? Is there a python interface for these or do we need to resort to screen-scraping?

Error with photos fading to black

Hi, I love the ease of use of this code. I have an issue though: my code takes between 33 and 93 photos before they start going dark. See the two screenshots below.

(Example 1 and Example 2: screenshots of the darkened photos, not reproduced here.)

if it helps, I have seen other people discuss this kind of issue when using the raspistill software: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=45783&sid=3721788cbae337d91e3036ccd25ec53e&start=100

Essentially, they thought it was to do with using the raspistill '-n' switch, and that if it was left out it would work OK. JamesSH said this has been fixed in the latest version of his code.

Here is the code I am using to take the photos:

# create controller
gpsc = GpsController()

# start controller
gpsc.start()

# finish off building the master filename
FILENAME = FILENAME + datetime.datetime.now().strftime("%Y-%m-%d_%H%M_")
with picamera.PiCamera() as camera:
    # Setup variables for PiCamera
    camera.exif_tags['IFD0.Copyright'] = 'insert copyright here'
    camera.exif_tags['IFD0.ImageDescription'] = 'insert description here'
    camera.exif_tags['GPS.GPSDOP'] = DOP
    camera.exif_tags['GPS.GPSAltitude'] = ALT
    camera.exif_tags['GPS.GPSMeasureMode'] = str(GPSMODE)
    camera.exif_tags['GPS.GPSLatitudeRef'] = exif_lat_ref(GPSLAT)
    camera.exif_tags['GPS.GPSLatitude'] = LAT
    camera.exif_tags['GPS.GPSLongitudeRef'] = exif_long_ref(GPSLONG)
    camera.exif_tags['GPS.GPSLongitude'] = LONG
    camera.resolution = (photo_width, photo_height)

    sound_buzzer(beep_short)
    for i, filename in enumerate(camera.capture_continuous(FILENAME + '{counter:05d}.jpg')):
        print(filename)
        time.sleep(interval_time)
        sound_buzzer(beep_short)
        if i == num_photos - 1:
            break

# stop controller
gpsc.stopController()

Just for other people wondering what's going on in the above code: I am reading an Adafruit Ultimate GPS (code omitted) and sounding a buzzer just before every photo is taken. I start my code and pass it the number of photos to be taken (num_photos) and the interval between photos (interval_time).

I am using version: 0.5 of the Picamera code.

Thanks

Add ability to determine presentation timestamp

As mentioned in the comments for #34, it should be easy to query the pts (presentation timestamp) and dts (decoder timestamp) properties from the buffer headers. At the moment, I'm not convinced we need dts (the comments in the buffer header indicate that this only deviates from pts when B-frames are involved, but as far as I know the H.264 encoder never outputs B-frames). However, pts should be easy to extract and make accessible to end users.

This brings up the question of design. I'm not convinced I named the frame property very well. Without looking at the docs it's not obvious whether it's referring to the frame image, an object representing the frame, a frame number, or something else. I could rename it frame_number, but now that pts is getting added, another possibility presents itself:

Change the frame property from returning a simple frame number to being something that returns a namedtuple which represents various properties about the current frame (number, pts, and maybe a keyframe property?).

This is a backwards incompatible change, but given we haven't made a release including the frame property yet I don't think that's a huge deal. And the only change for any users using it at the moment is from frame to, say, frame.index

GPIO.cleanup() in camera.close()

Hi,

I'm getting a problem as my calling program uses the GPIO, but Camera.close() calls GPIO.cleanup(), which resets the GPIO state and makes my program crash.

As an example, I want to use a button to start the camera. The code for my button exists in my calling program; I run the camera and stop it when I press the button. The next time I want to use the GPIO I can't, because camera.close() has called cleanup(), resetting the GPIO.

A possible solution could be to pass an optional useLed parameter (which could default to True) on init of the PiCamera class. I have made the change in my fork of picamera: https://github.com/martinohanlon/picamera/blob/master/picamera/camera.py

To replicate the issue, you can run the following:

import picamera
import RPi.GPIO as GPIO

# set gpio mode
GPIO.setmode(GPIO.BCM)

# setup gpio pin as output
GPIO.setup(17, GPIO.OUT)

# turn gpio on
GPIO.output(17, True)

with picamera.PiCamera() as camera:
    camera.resolution = (800, 600)
    camera.framerate = 25
    camera.start_recording("my_video.h264")
    camera.wait_recording(1)
    camera.stop_recording()
    camera.close()

# turn gpio off - error occurs here!
GPIO.output(17, False)

camera start_recording() to process.stdout crashes on stop_recording()

...and the exception raised is not defined. Bug inception! ;)

    cam.start_recording(process.stdin)
    time.sleep(5)
    cam.stop_recording()
Traceback (most recent call last):
    cam.stop_recording()
  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 571, in stop_recording
    self.wait_recording(0)
  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 558, in wait_recording
    self._video_encoder.wait(timeout)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 285, in wait
    raise self.exception
NameError: global name 'PiCameraError' is not defined

The data is written to process.stdin correctly but the script crashes anyway.

Add record_sequence

To mirror capture_continuous we should have a record_sequence. Similar calling convention (filename_or_obj for the first parameter, with filenames as format()-based substitution templates), but the first iteration of the loop calls start_recording, subsequent iterations call split_recording, and terminating the iterator calls stop_recording (need to check the last bit always works under all circumstances).

Video capture fails during motion detection example from recipes2.rst

Hello,

I'm trying to get the motion detection example at the bottom of recipes2.rst to work (out of the box, leaving in the random stub ;-)). I've copied and pasted the code and all is fine until motion is detected.

./test_vidcap.py
Motion detected!
Traceback (most recent call last):
  File "./test_vidcap.py", line 55, in <module>
    camera.split_recording('after.h264')
  File "/home/pi/picamera/picamera/camera.py", line 736, in split_recording
    self._video_encoder.split(output)
  File "/home/pi/picamera/picamera/encoders.py", line 533, in split
    raise PiCameraRuntimeError('Timed out waiting for an SPS header')
picamera.exc.PiCameraRuntimeError: Timed out waiting for an SPS header

pi@raspberrypi ~/picamera $ head -3 picamera.egg-info/PKG-INFO
Metadata-Version: 1.1
Name: picamera
Version: 1.1

I've tried dropping the resolution line as I thought it might be related to #46 but no joy.

Investigate protection against multi-access lockup

A thread on the Pi Camera forum indicates it may be possible to guard against lockup of the camera when multiple processes attempt to access it:

While fumbling with vcdbg syms and vdbg dump 0xdec02020 (the address of cam_alloc_alloc_count), I found that when the camera is not being used, the value of cam_alloc_alloc_count is 0, and when in use, it is 1.
Now to find an appropriate place to implement a test for this to prevent crashing the camera.

This should be added to PiCamera.__init__ but looks like it'll require a bit more header translation, hence it won't make it into 0.5, but should be added before 1.0.

Add python 3 installation instructions

There are a few brave users out there jumping in with Python 3, but the instructions don't make it clear that easy_install3 is required for installation in this case.

Add a preview_layer property

@Dishwishy has done something interesting with their fork of picamera: Dishwishy/picamera@dde849a. It seems as if it's necessary to lower the "layer" of the preview (an attribute I don't know anything about yet) to render things on top of it. Given this is an eventual goal of #16 we should add a preview_layer property to permit configuration at runtime.

Encoders assume a write method which returns bytes written

The encoder's _callback_write() method assumes a write method on the underlying file-like object that returns the number of bytes written. This is correct for everything in Python 3, and for all IO streams in Python 2 which are implemented by the "io" module. However, for legacy Python 2 streams like cStringIO.StringIO, it's not true, and not required by the file-like object interface specification in Python 2. The method should probably be made more intelligent, using tell() in the case that write doesn't return a value (although only in Python 2).
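
A sketch of the suggested fix as a standalone helper (robust_write is a hypothetical name; the real change would live in _callback_write):

```python
def robust_write(stream, buf):
    """Write buf to stream and return the number of bytes written,
    falling back to tell() arithmetic for legacy streams whose write()
    returns None (e.g. cStringIO.StringIO in Python 2)."""
    try:
        start = stream.tell()
    except (AttributeError, IOError, OSError):
        start = None
    result = stream.write(buf)
    if result is not None:
        return result           # modern io streams report bytes written
    if start is not None:
        return stream.tell() - start  # legacy but seekable: use tell()
    return len(buf)             # non-seekable legacy: assume a full write
```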

Refactoring

For 0.6 (if we do one) or 1.0, some refactoring is necessary. __init__.py has grown excessively large. The encoders should be split into their own module, and probably PiCamera should be moved into its own too (although it, and the exception classes, should remain accessible from the root package namespace for user friendliness). Need to investigate the dependencies to see how easily this can be done.

Network examples in recipes

It seems like off-loading processing to another machine is a popular theme of camera projects. Consequently, the recipes chapter ought to include a couple of samples of streaming both images and video across the network to another machine.

Unable to see the preview through the network!

I'm trying to use the camera board on a Raspberry Pi and your module to send images through the network.
I used your examples (4.6 Capturing to a network stream), with my PC as server and the Pi as client, and when I start the scripts the terminal says:

Image is 640x480
Image is verified

but I don't see the images, so I don't know why this line doesn't work:

image = Image.open(image_stream)

What do I have to do?
Thanks very much
Filippo

led property fails with RPIO library

When the RPIO library is installed instead of RPi.GPIO, the picamera library fails to initialize with an InvalidChannelException when GPIO.setup is called. It appears that while RPi.GPIO supports pin 5, RPIO doesn't. A forum post from jbeale points to a possible solution with pin 24. Need to test whether this will also work with RPi.GPIO, or whether the two libraries will require different calls.

Error on start_recording

Whenever I try to call start_recording I get the error below - is this known? Standard raspivid and raspistill work fine.

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_recording('my_video.h264')
    camera.wait_recording(60)
    camera.stop_recording()

mmal: mmal_vc_port_parameter_set: failed to set port parameter 32:2:ENOSYS
Traceback (most recent call last):
  File "test2.py", line 5, in <module>
    camera.start_recording('my_video.h264')
  File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 689, in start_recording
    self._video_encoder = PiVideoEncoder(self, enc_port, format, **options)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 306, in __init__
    super(PiVideoEncoder, self).__init__(parent, port, format, **options)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 96, in __init__
    self._create_encoder(format, **options)
  File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 397, in _create_encoder
    prefix="Unable to set inline_headers")
  File "/usr/local/lib/python2.7/dist-packages/picamera/exc.py", line 102, in mmal_check
    }.get(status, "Unknown status error")))
picamera.exc.PiCameraError: Unable to set inline_headers: Function not implemented
