Coding4Fun Kinect Projects (HD) - Channel 9

Raspberry Pi 2 and the Kinect, Making a Hand Held Scanner


This project is off to a great start and something I think we should keep an eye on...

Proof of concept 3D Scanner with Kinect and Raspberry Pi2

I am working on a proof-of-concept standalone mobile 3D scanner. Hopefully it will be possible to use a Raspberry Pi 2 for this project. I have already posted a video on YouTube, and some people asked for short instructions on how to run the Kinect on the Raspberry Pi 2. Here they come….

First I printed and modified a Kinect handle I found on Thingiverse. I remixed this handle and added a Raspberry Pi 2 and a display mount to it. You can find the files at: http://www.thingiverse.com/thing:698577


You can get the Raspberry Pi display from watterott.com; instructions for installing the display can be found on GitHub. I recommend using the current Raspberry Pi image, which you can also find on GitHub.


Start with the clean display image. I used libfreenect for some experiments. It seems that libfreenect provides all the functionality offered by the Kinect. Let's start!

First of all we need to install all the required libs. We start with an update of the package list.

...

Project Information URL: http://www.mariolukas.de/2015/04/proof-of-concept-3d-scanner-with-kinect-and-raspberry-pi2/





Kinect Helps Detect PTSD


As a Cold War army veteran with a son who deployed to Afghanistan, I found this post hit close to home....

Kinect helps detect PTSD in combat soldiers

...

According to the U.S. Department of Veterans Affairs, PTSD affects 11 to 20 percent of veterans who have served in the most recent conflicts in Afghanistan and Iraq. It’s no wonder, then, that DARPA (the Defense Advanced Research Projects Agency, a part of the U.S. Department of Defense), wants to detect signs of PTSD in soldiers, in order to provide treatment as soon as possible.

One promising DARPA-funded PTSD project that has garnered substantial attention is SimSensei, a system that can detect the symptoms of PTSD while soldiers speak with a computer-generated “virtual human.” SimSensei is based on the premise that a person’s nonverbal communications—things like facial expressions, posture, gestures and speech patterns (as opposed to speech content)—are as important as what he or she says verbally in revealing signs of anxiety, stress and depression.

The Kinect sensor plays a prominent role in SimSensei by tracking the soldier’s body and posture. So, when the on-screen virtual human (researchers have named her Ellie, by the way) asks the soldier how he is feeling, the Kinect sensor tracks his overall movement and changes in posture during his reply. These nonverbal signs can reveal stress and anxiety, even if the soldier’s verbal response is “I feel fine.”

SimSensei interviews take place in a small, private room, with the subject sitting opposite the computer monitor. The Kinect sensor and other tracking devices are carefully arranged to capture all the nonverbal input. Ellie, who has been programmed with a friendly, nonjudgmental persona, asks questions in a quiet, even-tempered voice. The interview begins with fairly routine, nonthreatening queries, such as “Where are you from?” and then proceeds to more existential questions, like “When was the last time you were really happy?” Replies yield a host of verbal and nonverbal data, all of which is processed algorithmically to determine if the subject is showing the anxiety, stress and flat affect that can be signs of PTSD. If the system picks up such signals, Ellie has been programmed to ask follow-up questions that help determine if the subject needs to be seen by a human therapist.

...

Giota Stratou, one of ICT’s key programmers of SimSensei, provided details on the role of the Kinect sensor. “We used the original Kinect sensor and SDKs 1.6 and 1.7, particularly to track the points and angles of rotation of skeletal joints, from which we constructed skeleton-based features for nonverbal behavior. We included in our analysis features encoded from the skeleton focusing on head movement, hand movement and position, and we studied overall value by integrating in our distress predictor models.”
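To give a flavor of the data involved, here is a minimal C# sketch, mine rather than the project's, of reading joint positions and bone rotations with the Kinect for Windows SDK 1.x that Stratou mentions:

using System;
using System.Linq;
using Microsoft.Kinect; // Kinect for Windows SDK 1.x

class SkeletonSample
{
    static void Main()
    {
        // Grab the first connected sensor and enable skeletal tracking.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (Skeleton body in skeletons
                    .Where(k => k.TrackingState == SkeletonTrackingState.Tracked))
                {
                    // Absolute head position in metres (camera space) and the
                    // head bone's rotation as a quaternion (SDK 1.5 and later).
                    SkeletonPoint head = body.Joints[JointType.Head].Position;
                    Vector4 q = body.BoneOrientations[JointType.Head].AbsoluteRotation.Quaternion;
                    Console.WriteLine("Head ({0:F2}, {1:F2}, {2:F2}) rot w={3:F2}",
                        head.X, head.Y, head.Z, q.W);
                }
            }
        };
        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}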

...


Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/07/01/kinect-helps-detect-ptsd-in-combat-soldiers.aspx




Building the NUI Future...


Today's presentation is from Vincent Guigui who talks about the future of the NUI.

Building the Future of User Experience (from Vimeo)

Kinect, Oculus, Holograms, Wearables, Smart Objects... Over the past few years, we have seen a rise of new devices and sensors coming into our everyday lives.

This session will explain the principles of interfaces, what innovation is, and how to use these new devices to create more natural and more personal computing experiences by blurring the line between our world and the digital one.

Project Information URL: https://vimeo.com/131932860, http://fr.slideshare.net/gcrao78/ncraftsio-2015-future-of-user-experiences 




Researching with HoloLens (and be awarded Dev Kits and Cash)


You're a researcher. You've seen HoloLens. You've got this great idea of how you can use the HoloLens to change the world. You just need a little help... say, maybe two HoloLens dev kits and $100,000...

Academic Research Request for Proposals

Microsoft believes that mixed reality can be used to create new experiences that will contribute to advances in productivity, collaboration, and innovation. We engage with researchers across many disciplines to push boundaries in the state of the art at the intersection of software and hardware.

Microsoft HoloLens goes beyond augmented reality and virtual reality by enabling you to interact with three-dimensional holograms blended with your real world. Microsoft HoloLens is more than a simple heads-up display, and its transparency means you never lose sight of the world around you. High-definition holograms integrated with your real world will unlock all-new ways to create, communicate, work, and play.


Goals

The primary goal of this request for proposals (RFP) is to better understand the role and possible applications for holographic computing in society. Additional goals are to stimulate and advance academic research in mixed reality and encourage applications of holograms for novel purposes.

Proposals are invited from, but not limited to, the following areas of interest:

  • Data visualization
    • Example: Using mixed reality to make large data sets easier to navigate and understand
  • Evolution of pedagogy in STEM, medical, and design education
    • Example: Using existing 3D assets or new 3D assets for high-value training (e.g., interactive 3D models for medical training)
  • Future of communication and distributed collaboration
    • Examples: Remote training and support, first-responder emergency management, and virtual conferences
  • Interactive art and experimental media
    • Examples: Narrative storytelling, new forms of artistic expression, interactive journalism
  • Psychology-related topics
    • Examples: Human perception and human-computer interaction
  • Solving difficult problems and contributing new insights that are specific to the applicant’s field

Monetary and hardware awards

  • Microsoft anticipates making approximately five (5) awards consisting of US$100,000 and two Microsoft HoloLens development kits each. All awards are in the form of unrestricted gifts, which are delivered directly to the universities for the purpose of funding the winning proposals.
  • The awards are intended to be used for seed-funding larger initiatives, proofs of concept, or demonstrations of feasibility. It is important to understand that funding is not expected to continue after the first year and that PIs who are granted the Microsoft HoloLens Research Awards should therefore make every effort to use the award as one component of a diverse funding base in a larger or longer-running project. Proposals with a clear plan to secure co-funding are encouraged.

...

Submission process

Proposals must be written in English and submitted through the online application tool (https://cmt.research.microsoft.com/HoloLensRFP) no later than 11:30 P.M. (Pacific Daylight Time) on September 5, 2015.

...

Project Information URL: http://research.microsoft.com/en-us/projects/hololens/default.aspx




Mousing around with the Kinect v2


Friend of the Gallery and newly minted Microsoft Kinect MVP (congrats!) Tango Chen is back with an answer to an extremely common request: using the Kinect to control the mouse (with source!)

Kinect v2 Mouse Control w/ Source Code

A mouse control application for Kinect v2, including a couple of options for various uses.

I’m so glad to have become a Microsoft Kinect MVP this July, so I think I need to do more things.

One of the requests asked most often since the original Kinect came out is, “Can I use my hand to control the mouse cursor?” You can find a few applications on the web for doing this with the original Kinect.

These days, people still ask me for this kind of application for the Kinect v2. Strangely, I found there were no such downloadable applications on the web. So I gotta release a Kinect v2 version. And here it is, with source code!


Options:

  • Mouse Sensitivity
  • Pause-To-Click Time Required

How long you hold your hand still for it to register as a click

  • Pause Movement Threshold

How large the circle is within which you must hold your hand for a little while for it to count as a click

  • Cursor Smoothing

The larger it is, the smoother, but also slower, the cursor will move.

  • Grip Gesture
    Grip to drag/click
  • Pause To Click
    Hold your hand and don’t move for a little while to click.
  • No clicks, move cursor only
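To give an idea of the core technique, here is a hedged C# sketch of the basic idea behind such an app: map a tracked hand joint to screen coordinates with the Kinect v2 SDK and move the cursor with a Win32 call. The scaling constants are arbitrary stand-ins for the sensitivity option above; see Tango Chen's source below for the real implementation.

using System;
using System.Linq;
using System.Runtime.InteropServices;
using Microsoft.Kinect; // Kinect for Windows SDK 2.0

class KinectMouseSketch
{
    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int x, int y);

    static Body[] bodies;

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                if (bodies == null) bodies = new Body[frame.BodyCount];
                frame.GetAndRefreshBodyData(bodies);

                Body body = bodies.FirstOrDefault(b => b.IsTracked);
                if (body == null) return;

                // Hand position relative to the spine, in metres (camera space).
                CameraSpacePoint hand = body.Joints[JointType.HandRight].Position;
                CameraSpacePoint spine = body.Joints[JointType.SpineMid].Position;

                // Map a ~70 cm reach box around the spine onto a 1920x1080 screen.
                // These numbers play the role of the "Mouse Sensitivity" option.
                int x = (int)((hand.X - spine.X + 0.35f) / 0.7f * 1920);
                int y = (int)((spine.Y - hand.Y + 0.35f) / 0.7f * 1080);
                SetCursorPos(x, y);
            }
        };
        sensor.Open();
        Console.ReadLine();
        sensor.Close();
    }
}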

Project Information URL: http://tangochen.com/blog/?p=2137

Project Download URL: Kinect V2 Mouse Control - EXE

Project Source URL: https://github.com/TangoChen/KinectV2MouseControl





Kinect Studio Revisited


We've highlighted the Kinect Studio a number of times...

...but it's been over a year since our last post on it, so this post from the Kinect for Windows Team is nice and timely.

Kinect Studio lets you code on the go

...

Luckily for Anup, Kinect Studio makes coding for Kinect for Windows applications a lot easier to pack into a crowded day. Kinect Studio, which is included in the free Kinect for Windows SDK 2.0, allows a developer to record all the data that’s coming into an application through a Kinect sensor. This means that you can capture the data on a series of body movements or hand gestures and then use that data over and over again to debug or enhance your code. Instead of being moored to a Kinect for Windows setup and having to repeatedly act out the user scenario, you have a faithful record of the color image, the depth data, and the three-dimensional relationships. With this data uploaded to your handy laptop, you can squeeze in a little—or a lot—of Kinect coding whenever time permits.

Let’s take a quick look at the main features of Kinect Studio. As shown below, it features four windows: a color viewer, a depth viewer, a 3D viewer, and a control window that lets you record and play back the captured data.

The four windows in Kinect Studio, clockwise from top: control window, color viewer, depth viewer, and 3D viewer

The color viewer shows exactly what you’d expect: a faithful color image of the user scenario. The depth viewer shows the distance of people and objects in the scene using color: near objects appear red; distant ones are blue; and objects in-between show up in various shades of orange, yellow, and green. The 3D viewer gives you a three-dimensional wire-frame model of the scene, which you can rotate to explore from different perspectives.

The control, or main, window in Kinect Studio is what brings all the magic together. Here’s where you find the controls to record, save, and play back the captured scenario. You can stop and start the recording by moving the cursor along a timeline, and you can select and save sections.

Once you’ve recorded the user scenario and saved it to your laptop in Kinect Studio, you can play it over and over while you modify the code. The developers at Ubi, for instance, employ Kinect Studio to record usability sessions, during which end users act out various scenarios employing Ubi software. They can replay, stop, and start the scenarios frame by frame, to make sure their code is behaving exactly as they want. And since the recordings are accessible from a laptop, developers can test and modify their Kinect for Windows application code just about anywhere.
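Recorded clips (.xef files) can also be played back from code via the Microsoft.Kinect.Tools assembly that ships with the SDK, so a running Kinect application receives the recorded frames as if they were live. Here is a rough C# sketch; the file path is hypothetical, and the exact calls are worth verifying against the SDK documentation:

using System;
using System.Threading;
using Microsoft.Kinect.Tools; // ships with the Kinect for Windows SDK 2.0

class PlaybackSketch
{
    static void Main()
    {
        using (KStudioClient client = KStudio.CreateClient())
        {
            client.ConnectToService();

            // Path is hypothetical; point it at one of your own recordings.
            using (KStudioPlayback playback = client.CreatePlayback(@"C:\clips\gesture-session.xef"))
            {
                playback.Start();
                while (playback.State == KStudioPlaybackState.Playing)
                {
                    Thread.Sleep(100);
                }
            }

            client.DisconnectFromService();
        }
    }
}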

Using Kinect Studio to perform and analyze user experience studies

For Anup, it means that he can code during the bus ride home or in bed, after his children have gone to sleep. “Kinect Studio doesn’t actually increase the number of hours in the day,” he says, “but it sure feels like it.”

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/07/10/kinect-studio-lets-you-code-on-the-go.aspx




NextStage - Realtime Camera Tracking for Kinect


Today's commercial project shows off just how powerful the Kinect really can be...

NextStage

For the past year I’ve been developing an application called NextStage. NextStage turns the Kinect V2 into a realtime virtual production camera, by tracking retroreflective markers in a scene.

More information can be found at NextStagePro.com and in the video below:

This is full six-degree-of-freedom (6DOF) tracking running in realtime. Compared to the 6DOF tracking in Kinect Fusion, it does take more time to set up the markers. However, it can track over flat surfaces, is less processor-intensive, doesn’t require a powerful GPU the way Fusion does, can handle fast motion and dynamic objects in the scene, and doesn’t have the same drift errors that Fusion can have.

I know people don't normally post their applications on this forum, but I think there are some features relevant to Kinect developers and enthusiasts.

There are two versions of NextStage, and NextStage Pro can stream the tracking data out to other applications using the OSC framework. This stream includes the Kinect’s position in meters, quaternion rotation, Euler rotation, and the Kinect timestamp. Since multiple applications can access the Kinect at once, you can run NextStage in the background and stream the data out to your own Kinect project.
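Since OSC rides on UDP, consuming that stream takes very little code. Here is a bare-bones C# listener as a hedged illustration; the port number is an assumption, and a real client would hand each packet to an OSC library (Rug.Osc, for example) for decoding rather than just counting bytes:

using System;
using System.Net;
using System.Net.Sockets;

class OscListenerSketch
{
    static void Main()
    {
        // Port is hypothetical; use whatever NextStage Pro is configured to send to.
        using (var udp = new UdpClient(9000))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                // Each datagram is one OSC packet carrying the Kinect's position
                // (metres), quaternion/Euler rotation, and timestamp.
                byte[] packet = udp.Receive(ref remote);
                Console.WriteLine("Received {0}-byte OSC packet from {1}", packet.Length, remote);
                // Hand 'packet' to an OSC library for decoding here.
            }
        }
    }
}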

The marker sets that NextStage uses for tracking can also be shared between installations of NextStage. This can be used to very quickly calculate the difference in position and rotation between multiple Kinects.

I’ve been developing this application pretty much in a vacuum, but I’m very excited to finally get it out into the world. Please let me know if you have any questions, comments or concerns.

---

- Realtime Camera Tracking

When combined with infrared or retroreflective markers, NextStage is capable of instantly and accurately tracking position and rotation in 3D space.

- Instant Matchmoving

6DOF tracking lets users easily combine live action footage with virtual objects and sets, without the need for tedious frame-by-frame post processing.

- Depth-based Keying

Separate live action subjects from the background in realtime. Depth mattes let users place live action people or subjects on a virtual set without the need for green screen.

- Creative Effects

Depth mattes can be used as an instant, high quality garbage matte for green screen footage, or to quickly rotoscope actors and objects.

- HD Capture

Capture uncompressed RGBA footage in 720p with NextStage Lite, or sync tracking data to an external camera with NextStage Pro.

- Flexible Workflows

NextStage Pro lets users export 30 Hz tracking data to sync external cameras and devices at 24, 25 and 30 frames per second.

Project Information URL: https://social.msdn.microsoft.com/Forums/en-US/5b3ce727-7289-4a4c-a745-b635d157e9bc/nextstage-pro-realtime-camera-tracking-for-kinect?forum=kinectv2sdk, http://nextstagepro.com/




"Anatomy for Sculptors"


Today's post isn't really Kinect related, but it is augmented reality, and it's just kind of cool and not something I run across often...

New 3D Augmented Reality Book - Anatomy for Sculptors

This book with 3D model images will help painters, sculptors, illustrators and CG artists to develop sculptures, paintings or digital images.

Head & Neck Anatomy is the latest book from Anatomy for Sculptors. The thing that makes this book different from other medical books is the use of augmented reality. This advanced technology has been used in the book to provide readers with 3D images of the head and neck.

By integrating 3D imagery into the book, readers will be able to understand the material better.

Project Information URL: http://x-tech.am/new-3d-augmented-reality-book/

ANATOMY FOR SCULPTORS


PDF e-book

A 226-page, easy-to-use human anatomy guide for artists, explaining the human body in a simple manner. The book contains keys to figuring out construction in a direct, easy-to-follow, and highly visual manner. Art students, 3D sculptors, and illustrators alike will find this manual a practical foundation upon which to build their knowledge of anatomy – an essential background for anyone wishing to draw or sculpt easily and with confidence!

Uldis Zarins’ presentation of the book on uartsy.com - http://www.uartsy.com/program-info/anatomy-for-sculptors-free-webinar-replay-july-2014





Things to check when running Kinect for Windows apps on Windows 10


Friend of the Gallery Abhijit Jana recently posted a couple of tips for those of you Kinect v2 devs moving to Windows 10.

Running Kinect for Windows applications on Windows 10 – Things you should verify

Running a Kinect v2 device and a Kinect for Windows application on Windows 10 is neither difficult nor different from what we have seen with earlier versions of the Windows operating system. You can run a Kinect for Windows application (either a desktop app or a Store app) on Windows 10. However, in case you find that your device is not detected properly, your application is not running, or it is not able to read data from the sensor, please verify the following.

1.  Verify Device Settings

The very first thing you need to verify is whether your device is connected and loaded properly.

Go To PC Settings –> Devices –> Connected Devices

...

2.  Verify Privacy Settings

This setting needs to be verified only for a “Store App”. For a Kinect for Windows Store app, we must select the “Microphone” and “Video” capabilities in the app manifest file. This enables the app to access the camera and microphone on the targeted device.

...
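For reference, here is roughly what those capabilities look like in a Store app's Package.appxmanifest. This is a generic sketch, not taken from Abhijit's post; the privacy settings page calls them "Microphone" and "Video", while the manifest schema declares them as microphone and webcam device capabilities:

<!-- Package.appxmanifest (excerpt): lets the app use the sensor's
     microphone and camera streams. -->
<Capabilities>
  <DeviceCapability Name="microphone" />
  <DeviceCapability Name="webcam" />
</Capabilities>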

Points to remember
  1. This privacy setting applies only to Store apps. A normal desktop app will work without it.
  2. Even if the app is not running, the PC settings will be available once the app is deployed with the capability added. You can make the necessary change in the settings and then start the app.

Project Information URL: http://dailydotnettips.com/2015/08/01/running-kinect-for-windows-applications-on-windows-10-things-you-should-verify/

Contact Information:




Kinect'ing with Gregory Kramida, University of Maryland


Shahed Chowdhuri recently posted a great interview, something I don't see nearly often enough...

UMD Kinect Q&A: an interview with Gregory Kramida at the University of Maryland

We’re here with Gregory Kramida to talk about his Kinect group projects at the University of Maryland.

Gregory Kramida at UMD

1. Greg, tell us a little bit about yourself and your team.

2. How did you get started with Kinect development?

3. What programming languages and libraries/utilities are you using?

4. What kind of challenges and limitations have you faced? How did you overcome them?

5. How many Kinect sensors are you using from a single application? Can you give more details about your configuration/setup?

6. What are the practical applications of the work you’ve done so far? What is the future direction of your projects?

7. Do you have any advice for other Kinect developers out there?

[Click through to read the entire post, including the answers... ;) ]

Project Information URL: http://wakeupandcode.com/umd-kinect-qa/

Contact Information:




Kinect v2 Avateering


Peter Daukintis, Friend of the Gallery, posted another great example of using the Kinect v2, this time using it and its capabilities to start an Avatar journey...


Avateering with Kinect V2 – Joint Orientations

For my own learning, I wanted to understand the process of using the Kinect V2 to drive the real-time movement of a character made in 3D modelling software. This post is the first part of that learning: taking the joint orientation data provided by the Kinect SDK and using it to position and rotate ‘bones’, which I will represent by rendering cubes, since this is a very simple way to visualise the data. (I won’t cover smoothing the data or modelling/rigging in this post.) The result should be something similar to the Kinect Evolution Block Man demo, which can be discovered using the Kinect SDK browser.


To follow this along you will need a working Kinect V2 sensor with its USB adapter, a fairly high-specced machine running Windows 8.0/8.1 with USB 3.0 and a DirectX 11-compatible GPU, and the Kinect V2 SDK installed. Here are some instructions for setting up your environment.

To back up a little: there are two main ways to represent body data from the Kinect. The first is to use the absolute positions provided by the SDK, which are values in 3D camera space measured in metres; the other is to use the joint orientation data to rotate a hierarchy of bones. The latter is the one we will look at here. Now, there is an advantage in using joint orientations: as long as your model has the same overall skeleton structure as the Kinect data, it doesn’t matter so much what the relative sizes of the bones are, which frees up the modelling constraints. The SDK has done the job of calculating the rotations from the absolute joint positions for us, so let’s explore how we can apply those orientations in code.

Code

I am going to program this by starting with the DirectX and XAML C++ template in Visual Studio which provides a basic DirectX 11 environment, with XAML integration, basic shaders and a cube model described in code ...

Body Data

Let’s start by getting the body data into our program from the sensor. As always, we start by getting a KinectSensor object, which I will initialise in the Sample3DSceneRenderer class constructor; then we open a BodyFrameReader on the BodyFrameSource, for which there is a handy property on the KinectSensor object. ...
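Peter's project is C++, but the same sequence is quicker to show in C#. Here is a hedged sketch, not his code, of opening the reader and reading one joint's orientation quaternion:

using System;
using System.Linq;
using Microsoft.Kinect; // Kinect for Windows SDK 2.0

class JointOrientationSketch
{
    static Body[] bodies;

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                if (bodies == null) bodies = new Body[frame.BodyCount];
                frame.GetAndRefreshBodyData(bodies);

                Body body = bodies.FirstOrDefault(b => b.IsTracked);
                if (body == null) return;

                // Each joint carries an absolute orientation as a quaternion;
                // applying these down the bone hierarchy is what drives the avatar.
                Vector4 q = body.JointOrientations[JointType.ElbowRight].Orientation;
                Console.WriteLine("ElbowRight: {0:F2} {1:F2} {2:F2} {3:F2}", q.X, q.Y, q.Z, q.W);
            }
        };
        sensor.Open();
        Console.ReadLine();
        sensor.Close();
    }
}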

Kinect Joint Hierarchy

The first subject to consider is how the Kinect joint hierarchy is constructed as it is not made explicit in the SDK. Each joint is identified by one of the following enum values:...

Bones

To draw each separate bone I modified the original cube model that was supplied with the default project template. I modified the coordinates of the original cube so that one end was at the origin and the other was 4 units in the y-direction; so when rendered ...

...

...this shows the end result:


Project Information URL: http://peted.azurewebsites.net/avateering-with-kinect-v2-joint-orientations/

Project Source URL: https://github.com/peted70/kinectv2-avateer-jointorientations





Computational Hydrographic Printing


Today's inspirational project shows off another way the Kinect is being used in new, exciting and unanticipated ways...

Computational Hydrographic Printing (SIGGRAPH 2015)


Abstract:
Hydrographic printing is a well-known technique in industry for transferring color inks on a thin film to the surface of a manufactured 3D object. It enables high-quality coloring of object surfaces and works with a wide range of materials, but suffers from the inability to accurately register color texture to complex surface geometries. Thus, it is hardly usable by ordinary users with customized shapes and textures.

We present computational hydrographic printing, a new method that inherits the versatility of traditional hydrographic printing, while also enabling precise alignment of surface textures to possibly complex 3D surfaces. In particular, we propose the first computational model for simulating hydrographic printing process. This simulation enables us to compute a color image to feed into our hydrographic system for precise texture registration. We then build a physical hydrographic system upon off-the-shelf hardware, integrating virtual simulation, object calibration and controlled immersion. To overcome the difficulty of handling complex surfaces, we further extend our method to enable multiple immersions, each with a different object orientation, so the combined colors of individual immersions form a desired texture on the object surface. We validate the accuracy of our computational model through physical experiments, and demonstrate the efficacy and robustness of our system using a variety of objects with complex surface textures.

Project Information URL: http://www.cs.columbia.edu/~cxz/publications/hydrographics.pdf




Dark Olive Green Skin...


This is a great title for a post from Dwight Goins that I've been meaning to highlight for a while...

My Kinect told me I have Dark Olive Green Skin…

Did you know the Kinect for Windows v2 has the ability to determine your skin pigmentation and your hair color? Yes, I’m telling you the truth. One of the many features of the Kinect device is the ability to read the skin complexion and hair color of a person being tracked by the device.

If you ever need the ability to read the skin complexion of a person or to determine the color of the hair on a person's head, this post will show you how to do just that.


The steps are rather quick and simple. Determining the skin color requires you to access Kinect’s HD Face features.

Kinect has the ability to detect facial features in 3D. This is known as “HD Face”. It can detect depth, height, and width. The Kinect can also use its high-definition camera to detect colors, such as the red, green, and blue intensities that reflect back, and infer the actual skin tone of a tracked face. Along with the skin tone, the Kinect can also detect the hair color on top of a person’s head…

So What’s Your Skin Tone? Click Here to download the source code and try it out.

If you want to include this feature inside your application, the steps you must take are:

1. Create a new WPF or Windows 8.1 WPF application

2. Inside the new application, add a reference to the Microsoft.Kinect and Microsoft.Kinect.Face assemblies.

...
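The elided steps boil down to opening a model builder with the skin- and hair-color attributes enabled, letting it collect frames, and then reading two properties off the produced FaceModel. Here is a hedged C# fragment of the key part, assuming you already have the tracking id of a tracked body; the exact type of SkinColor differs between the WinRT and .NET flavors of the API, so treat the unpacking as an assumption:

using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

class SkinToneSketch
{
    static void Run(KinectSensor sensor, ulong trackedBodyId)
    {
        var faceSource = new HighDefinitionFaceFrameSource(sensor)
        {
            TrackingId = trackedBodyId
        };

        // Ask the model builder to capture skin and hair color while it collects frames.
        FaceModelBuilder builder = faceSource.OpenModelBuilder(
            FaceModelBuilderAttributes.SkinColor | FaceModelBuilderAttributes.HairColor);

        builder.CollectionCompleted += (s, e) =>
        {
            FaceModel model = e.ModelData.ProduceFaceModel();

            // Assumed here to be a packed 32-bit ARGB value.
            uint skin = model.SkinColor;
            Console.WriteLine("Skin R={0} G={1} B={2}",
                (skin >> 16) & 0xFF, (skin >> 8) & 0xFF, skin & 0xFF);
        };

        builder.BeginFaceDataCollection();
    }
}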

Once your application runs it should look similar to this (Minus the FrameStatus):


...

Project Information URL: https://dgoins.wordpress.com/2015/03/21/my-kinect-told-me-i-have-dark-olive-green-skin/





Resolving "Kinect Monitor (KinectMonitor) failed to start."


Today is a quick post from the one and only Bruno Capuano...

[#KINECTSDK] Error: Kinect Monitor (KinectMonitor) failed to start.

Today is (again) a quick post. I hope this one is my last error fix of the year 2014. Today’s issue is related to the installation process of the Kinect SDK v2. If you were using old SDKs, you’ll probably run into this error message:

Error code: 1920

Kinect Monitor (KinectMonitor) failed to start. Verify that you have sufficient privileges to start system services


So it’s time to check the log in the temp folder. There is a message which suggests that previous versions of the Kinect SDK did not delete some files during the uninstall process, and that’s why the current installer had problems deploying and registering the new Kinect service.

...

Project Information URL: http://elbruno.com/2014/12/19/kinectsdk-error-kinect-monitor-kinectmonitor-failed-to-start-2/





"Avatar Car Driving with Microsoft Kinect V2"


KinemotoSDK (Kinect v2 Web Player)


Today's commercial product was just recently announced and is something pretty new and interesting...

KinemotoSDK just released! (Kinect v2 in the browser)

We're proud to announce the release of our first product that is now also available at the Unity Asset Store: https://www.assetstore.unity3d.com/en/#!/content/35136

We use the SDK to create all our own Kinect v2.0 games and think it can benefit the dev community. The $30 fee we ask for the package allows us to maintain and support the SDK. The USP of this SDK, in combination with the Kinemoto server (Windows 8.1 only), is the fact that it allows you to run Unity apps within a browser and still use the different Kinect streams.


We created a dedicated developer page that contains several tutorial videos to get you started. Have a look at http://developer.kinemoto.com.

If you're interested in this SDK and want to check it out, let us know. We give away vouchers to non-profit organizations and to people who help us improve the product by providing valuable feedback ...

Project Information URL: https://social.msdn.microsoft.com/Forums/en-US/5704cb38-d063-48d8-b354-b782835b59f0/kinemotosdk-just-released-kinect-v2-in-the-browser?forum=kinectv2sdk

KinemotoSDK (Kinect Web Player)

The KinemotoSDK enables developers to use Kinect-enabled Unity apps/games in the Unity Web Player. With the KinemotoSDK, developers can easily add Kinect streams to their app/game, make use of Kinemoto functions and methods, and build for the Unity Web Player and standalone.

Developers only need to download and install the KinemotoServer and voila!


The currently available streams are: Body, BodyIndex and Color. More streams will be added over time.
Future releases will also include WebGL and Android support.

Getting started

In order to work with the KinemotoSDK, you need to download and install the SDK, Kinect drivers and KinemotoServer first.

Have a look at our video tutorials!


Project Information URL: http://developer.kinemoto.com




Kinect to HD Face


Friend of the Gallery and Kinect MVP Vangos Pterneas is back with a great and detailed post on developing with the HD Face API.


How to use Kinect HD Face

image

Throughout my previous article, I demonstrated how you can access the 2D positions of the eyes, nose, and mouth, using Microsoft’s Kinect Face API. The Face API provides us with some basic, yet impressive, functionality: we can detect the X and Y coordinates of 4 eye points and identify a few facial expressions using just a few lines of C# code. This is pretty cool for basic applications, like Augmented Reality games, but what if you need more advanced functionality from your app?

Recently, we decided to extend our Kinetisense project with advanced facial capabilities. More specifically, we needed to access more facial points, including lips, jaw and cheeks. Moreover, we needed the X, Y and Z position of each point in the 3D space. Kinect Face API could not help us, since it was very limited for our scope of work.

Thankfully, Microsoft has implemented a second Face API within the latest Kinect SDK v2. This API is called HD Face and is designed to blow your mind!

At the time of writing, HD Face is the most advanced face tracking library out there. Not only does it detect the human face, but it also allows you to access over 1,000 facial points in the 3D space. Real-time. Within a few milliseconds. Not convinced? I developed a basic program that displays all of these points. Creepy, huh?!

In this article, I am going to show you how to access all these points and display them on a canvas. I’ll also show you how to use Kinect HD Face efficiently and get the most out of it.

Prerequisites

Source Code

Tutorial

Although Kinect HD Face is truly powerful, you’ll notice that it’s badly documented, too. Insufficient documentation makes it hard to understand what’s going on inside the API. Actually, this is because HD Face is supposed to provide advanced, low-level functionality. It gives us access to raw facial data. We, the developers, are responsible for properly interpreting the data and using them in our applications. Let me guide you through the whole process.

Step 1: Create a new project

Let’s start by creating a new project. Launch Visual Studio and select File -> New Project. Select C# as your programming language and choose either the WPF or the Windows Store app template. Give your project a name and start coding.

...

But wait!

OK, we drew the points on screen. So what? Is there a way to actually understand what each point is? How can we identify where the eyes are? How can we detect the jaw? The API has no built-in mechanism to get a human-friendly representation of the face data. We need to handle over 1,000 points in the 3D space manually!

Don’t worry, though. Each one of the vertices has a specific index number. Knowing the index number, you can easily deduce what it corresponds to. For example, vertex numbers 1086, 820, 824, 840, 847, 850, 807, 782, and 755 belong to the left eyebrow.
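To make that concrete, here is a small C# helper, mine rather than Vangos', that pulls exactly those left-eyebrow vertices out of the live vertex list; CalculateVerticesForAlignment is the HD Face call that produces the 3D points:

using System.Collections.Generic;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

static class FaceSemantics
{
    // Vertex indices for the left eyebrow, as listed above.
    static readonly int[] LeftEyebrow = { 1086, 820, 824, 840, 847, 850, 807, 782, 755 };

    // Returns the 3D camera-space points of the left eyebrow for the current frame.
    public static List<CameraSpacePoint> GetLeftEyebrow(FaceModel model, FaceAlignment alignment)
    {
        IReadOnlyList<CameraSpacePoint> vertices = model.CalculateVerticesForAlignment(alignment);

        var points = new List<CameraSpacePoint>();
        foreach (int index in LeftEyebrow)
        {
            points.Add(vertices[index]);
        }
        return points;
    }
}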

Similarly, you can find accurate semantics for every point. Just play with the API, experiment with its capabilities and build your own next-gen facial applications!

If you wish, you can use the Color, Depth, or Infrared bitmap generator and display the camera view behind the face. Keep in mind that simultaneous bitmap and face rendering may cause performance issues in your application. So, handle with care and do not over-use your resources.


Project Information URL: http://pterneas.com/2015/06/06/kinect-hd-face/

Project Source URL: https://github.com/Vangos/kinect-2-face-hd





Finger Tracking with Metrilus Aiolos Finger Tracking Library


Today's library is one I've seen asked for a number of times on different forums and comments. Best of all, you can get it free and help them flesh it out...

Metrilus Aiolos Finger Tracking

We are excited to share our Finger Tracking library Aiolos for Kinect v2 with you. At this time, Aiolos is still in an experimental stage. Feel free to play with it, but don’t expect it to be perfect, yet. To improve #Aiolos we are interested in your feedback! What do you use it for? How would you like to use it? Please also tell us if you find bugs. This is especially important for us to further develop Aiolos.

Features

  • 2D positions of the finger tip, middle, and root joints
  • 2D contour points of the hand
  • 3D positions of the finger tip, middle, and root joints
  • 3D contour points of the hand
  • finger labeling (experimental)


Usage

Aiolos for Kinect v2 works side by side with the Kinect SDK. Get the infrared and depth images, put them into Aiolos, and get three 3D points for each finger. The download also includes a small sample program.

Project Information URL: http://www.metrilus.de/blog/portfolio-items/aiolos/




Unity Asset - Kinect [v1] with MS-SDK


Last week I was taken a little to task for not covering the many Kinect assets in the Unity Asset Store. Sure, I've blogged about a few, but I'd never actually searched the Store for Kinect assets. I know, "Bad Greg..."


I have to thank Rumen Filkov (aka RF Solutions) for pointing this out. Rumen has a number of assets there in the store, free and paid, which I'll be covering in the coming week to make up for missing this great resource... :)

The first is a Kinect v1 asset. Sure, the Kinect v1 has been out for years and has been superseded by the Kinect v2, but there are still a good number of v1s out there...

Kinect with MS-SDK


This is a set of Kinect v1 examples that uses several major scripts, grouped in one folder. It demonstrates how to use Kinect-controlled avatars, Kinect-detected gestures or other Kinect-related stuff in your own Unity projects. This asset uses the Kinect SDK/Runtime provided by Microsoft. For more Kinect v1-related examples, utilizing Kinect Interaction, Kinect Speech Recognition, Face Tracking or Background Removal, see the KinectExtras package. These two packages work with Kinect v1 only and can be used with both Unity Pro and Unity Free editors.
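To give a flavor of what using such an asset looks like inside Unity, here is a hypothetical C# MonoBehaviour sketch. KinectManager is the central script the package's change log below refers to, but the method names are assumptions for illustration only; check the asset's own documentation for the real API.

using UnityEngine;

// Hedged usage sketch: the KinectManager method names below are assumptions,
// not the asset's documented API.
public class HeadFollower : MonoBehaviour
{
    const int HeadJointIndex = 3; // hypothetical index of the head joint

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsUserDetected())
            return;

        uint userId = manager.GetPlayer1ID();

        // Follow the tracked user's head with this GameObject.
        transform.position = manager.GetJointPosition(userId, HeadJointIndex);
    }
}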

Project Download URL: https://www.assetstore.unity3d.com/en/#!/content/7747 

Kinect with MS-SDK

...

How to Run the Example:
1. Download and install Kinect SDK 1.8 or Kinect Runtime 1.8 as explained in Readme-Kinect-MsSdk.pdf, located in Assets-folder.
2. Download and import the package.
3. Open and run scene KinectAvatarsDemo, located in Assets/AvatarsDemo-folder.
4. Open and run scene KinectGesturesDemo, located in Assets/GesturesDemo-folder.
5. Open and run scene KinectOverlayDemo, located in Assets/OverlayDemo-folder.

Download:
The official release of ‘Kinect with MS-SDK’-package is available in the Unity Asset Store.
The project’s Git-repository is located here. The repository is private and its access is limited to contributors and donators only.

Troubleshooting:
* If you need integration with the KinectExtras, see ‘How to Integrate KinectExtras with the KinectManager’-section here.
* If you get DllNotFoundException, make sure you have installed the Kinect SDK 1.8 or Kinect Runtime 1.8.
* Kinect SDK 1.8 and tools (Windows-only) can be found here.
* The example was tested with Kinect SDK 1.5, 1.6, 1.7 and 1.8.
* Here is a link to the project’s Unity forum: http://forum.unity3d.com/threads/218033-Kinect-with-MS-SDK

What’s New in Version 1.11:
1. Added max-user-distance setting to KinectManager, to allow max-distance limitation.
2. Added maps-width-percent setting to KinectManager, to allow specifying of depth & color maps width as percent of the game-window width.
3. Added colliders to the avatars in KinectAvatarsDemo-scene.
4. Updated KinectOverlayDemo-scene to use full-screen background.
5. Updated calls to the KinectExtras-functions, in order to sync them to the latest Extras’ version.
6. Fixed Playmaker-Kinect actions.
7. Converted package to Unity v.4.5.

Playmaker Actions for ‘Kinect with MS-SDK’ and ‘KinectExtras with MsSDK':
And here is “one more thing”: A great Unity-package for designers and developers using Playmaker, created by my friend Jonathan O’Duffy from HitLab Australia and his team of talented students. It contains many ready-to-use Playmaker actions for Kinect and a lot of example scenes. The package integrates seamlessly with ‘Kinect with MS-SDK’ and ‘KinectExtras with MsSDK’-packages. I can only recommend it!

...

Project Information URL: http://rfilkov.com/2013/12/16/kinect-with-ms-sdk/




Kinect 2 Computer Vision


Kinect MVP James Ashley is back with a great example of using OpenCV v3 (which we highlighted in OpenCV turns 3 and Intel(R) INDE OpenCV), Emgu, and the Kinect v2 to implement computer vision and facial recognition.


Emgu, Kinect and Computer Vision


Last week saw the announcement of the long awaited OpenCV 3.0 release, the open source computer vision library originally developed by Intel that allows hackers and artists to analyze images in fun, fascinating and sometimes useful ways. It is an amazing library when combined with a sophisticated camera like the Kinect 2.0 sensor. The one downside is that you typically need to know how to work in C++ to make it work for you.

This is where EmguCV comes in. Emgu is a .NET wrapper library for OpenCV that allows you to use some of the power of OpenCV on .NET platforms like WPF and WinForms. Furthermore, all it takes to make it work with the Kinect is a few conversion functions, which I will show you in this post.

Emgu gotchas

The first trick is just doing all the correct things to get Emgu working for you. Because it is a wrapper around C++ classes, there are some not so straightforward things you need to remember to do.

1. First of all, Emgu downloads as an executable that extracts all its files to your C: drive. This is actually convenient since it makes sharing code and writing instructions immensely easier.

2. Any CPU isn’t going to cut it when setting up your project. You will need to specify your target CPU architecture since C++ isn’t as flexible about this as .NET is. Also, remember where your project’s executable is being compiled to. For instance, an x64 debug build gets compiled to the folder bin/x64/Debug, etc.

3. You need to grab the correct OpenCV C++ library files and drop them in the appropriate target project file for your project. Basically, when you run a program using Emgu, your executable expects to find the OpenCV libraries in its root directory. There are lots of ways to do this such as setting up pre-compile directives to copy the necessary files. The easiest way, though, is to just go to the right folder, e.g. C:\Emgu\emgucv-windows-universal-cuda 2.4.10.1940\bin\x64, copy everything in there and paste it into the correct project folder, e.g. bin/x64/Debug. If you do a straightforward copy/paste, just remember not to Clean your project or Rebuild your project since either action will delete all the content from the target folder.

4. Last step is the easiest. Reference the necessary Emgu libraries. The two base ones are Emgu.CV.dll and Emgu.Util.dll. I like to copy these files into a project subdirectory called libs and use relative paths for referencing the dlls, but you probably have your own preferred way, too.

WPF and Kinect SDK 2.0

I’m going to show you how to work with Emgu and Kinect in a WPF project. The main difficulty is simply converting between image types that Kinect knows and image types that are native to Emgu. I like to do these conversions using extension methods. I provided these extensions in my first book Beginning Kinect Programming about the Kinect 1 and will basically just be stealing from myself here.
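The conversion amounts to copying the Kinect color frame's BGRA bytes into an Emgu image. Here is a hedged sketch of one such extension method (not James's exact code):

using Emgu.CV;
using Emgu.CV.Structure;
using Microsoft.Kinect; // Kinect for Windows SDK 2.0

public static class KinectEmguExtensions
{
    // Copies a Kinect v2 color frame into an Emgu BGRA image for OpenCV processing.
    public static Image<Bgra, byte> ToOpenCVImage(this ColorFrame frame)
    {
        int width = frame.FrameDescription.Width;
        int height = frame.FrameDescription.Height;

        // 4 bytes per pixel in BGRA.
        byte[] pixels = new byte[width * height * 4];
        frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);

        var image = new Image<Bgra, byte>(width, height);
        image.Bytes = pixels; // Emgu exposes the raw pixel buffer via Bytes
        return image;
    }
}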

I assume you already know the basics of setting up a simple Kinect program in WPF. In MainWindow.xaml, just add an image to the root grid and call it rgb:

...


You should now be able to plug in any of the sample code provided with Emgu to get some cool CV going. As an example, in the code below I use the Haarcascade algorithms to identify heads and eyes in the Kinect video stream. I’m sampling the data every 10 frames because the Kinect is sending 30 frames a second while the Haarcascade code can take as long as 80ms to process. Here’s what the code would look like:

...
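The full listing is on James's blog; as a rough idea of what the Haar-cascade step looks like with Emgu, consider this hedged fragment (the cascade file name is one of the standard XML files that ship with OpenCV/Emgu, and the parameters are illustrative):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static class FaceDetectSketch
{
    // Detects face rectangles in a BGRA frame using an OpenCV Haar cascade.
    public static Rectangle[] DetectFaces(Image<Bgra, byte> frame)
    {
        using (var cascade = new CascadeClassifier(@"haarcascade_frontalface_default.xml"))
        {
            // Haar cascades operate on grayscale images.
            Image<Gray, byte> gray = frame.Convert<Gray, byte>();
            return cascade.DetectMultiScale(gray, 1.1, 10, Size.Empty, Size.Empty);
        }
    }
}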

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2015/06/11/Emgu-and-Kinect-and-Computer-Vision.aspx




