pranjal le 28

7/27/2019 Pranjal LE 28 http://slidepdf.com/reader/full/pranjal-le-28 1/23

Upload: rupeshuke
Post on 14-Apr-2018


ABSTRACT:

Visualizable objects in biology and medicine extend across a vast range of scale, from individual molecules and cells through the varieties of tissue and interstitial interfaces to complete organs, organ systems, and body parts. The practice of medicine and the study of biology have always relied on visualizations to study the relationship of anatomic structure to biologic function and to detect and treat disease and trauma that disturb or threaten normal life processes. Traditionally, these visualizations have been either direct, via surgery or biopsy, or indirect, requiring extensive mental reconstruction. The potential for revolutionary innovation in the practice of medicine and in biologic investigations lies in direct, fully immersive, real-time multisensory fusion of real and virtual information data streams into online, real-time visualizations available during actual clinical procedures or biological experiments. In the field of scientific visualization, the term "four-dimensional visualization" usually refers to the process of rendering a three-dimensional field of scalar values. "4D" is shorthand for "four-dimensional", the fourth dimension being time: 4D visualization takes three-dimensional images and adds the element of time to the process. The revolutionary capabilities of new three-dimensional (3-D) and four-dimensional (4-D) medical imaging modalities, along with computer reconstruction and rendering of multidimensional medical and histologic volume image data, obviate the need for physical dissection or abstract assembly of anatomy and provide powerful new opportunities for medical diagnosis and treatment, as well as for biological investigations. In contrast to 3D imaging diagnostic processes, 4D allows doctors to visualize internal anatomy moving in real time, so physicians and sonographers can detect or rule out any number of issues, from vascular anomalies to genetic syndromes. Time will reveal the importance of 4D visualization.


We feel great honour in presenting this paper at SPCTS-2005 at U.V.P.C.E. We especially thank the IEEE for organizing such a national-level symposium. This paper presentation competition has helped us gain knowledge and has provided us with a deep insight into the biomedical field. It has also made us take an interest in all recent developments in the biomedical field.


INDEX:

1. INTRODUCTION
2. 3D IMAGE GENERATION, DISPLAY AND VISUALIZATION
3. CONCEPT OF 4D VISUALIZATION
4. VOLOCITY - A RENDERING SYSTEM
5. 4D VISUALIZATION IN LIVING CELLS
6. WORKSTATION FOR ACQUISITION, RECONSTRUCTION AND VISUALIZATION OF 4D IMAGES OF HEART
7. 4D IMAGE WARPING FOR MEASUREMENT OF LONGITUDINAL BRAIN CHANGES
8. MED-SANAREA: BRIGHT FUTURE
9. BIBLIOGRAPHY


INTRODUCTION:

The practice of medicine and the study of biology have always relied on visualizations to study the relationship of anatomic structure to biologic function and to detect and treat disease and trauma that disturb or threaten normal life processes. Traditionally, these visualizations have been either direct, via surgery or biopsy, or indirect, requiring extensive mental reconstruction. The revolutionary capabilities of new three-dimensional (3-D) and four-dimensional (4-D) medical imaging modalities [computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound (US), etc.], along with computer reconstruction and rendering of multidimensional medical and histologic volume image data, obviate the need for physical dissection or abstract assembly of anatomy and provide powerful new opportunities for medical diagnosis and treatment, as well as for biological investigations.

4D-THE MODERN DIMENSION:

"4D" is shorthand for "four-dimensional", the fourth dimension being time. 4D visualization takes three-dimensional images and adds the element of time to the process. In contrast to 3D imaging diagnostic processes, 4D allows doctors to visualize internal anatomy moving in real time. For example, movement patterns of fetuses allow conclusions to be drawn about their development, and ultrasound-guided biopsies gain accuracy thanks to the visualization of needle movements in real time in all three planes. So physicians and sonographers can detect or rule out any number of issues, from vascular anomalies to genetic syndromes.

3D GIVES LIFE TO 4D:

Locked within 3-D biomedical images is significant information about the objects and their properties from which the images are derived. Efforts to unlock this information to reveal answers to the mysteries of form and function are couched in the domain of image processing and visualization. A variety of both standard and sophisticated methods have been developed to process (modify) images to selectively enhance the visibility and measurability of desired object features and properties. For example, both realism-preserving and perception-modulating approaches to image display have significantly advanced the practical usefulness of 4-D biomedical imaging.

Many life-threatening diseases and/or quality-of-life afflictions still require physical interventions into the body to reduce or remove disease or to alleviate harmful or painful conditions. But minimally invasive or noninvasive interventions are now within reach that effectively increase physician performance in arresting or curing disease; reduce risk, pain, complications, and recurrence for the patient; and decrease healthcare costs. What is yet required is a focused reduction of recent and continuing advances in visualization technology to the level of practice, so that they can provide the new tools and procedures that physicians "must have" to treat their patients and empower scientists in biomedical studies of structure-to-function relationships.


Forming an image is mapping some property of an object onto image space. This space is used to visualize the object and its properties and may be used to characterize quantitatively its structure or function. Imaging science may be defined as the study of these mappings and the development of ways to better understand them, to improve them, and to use them productively. The challenge of imaging science is to provide advanced capabilities for acquisition, processing, visualization, and quantitative analysis of biomedical images, to increase substantially the faithful extraction of the useful information that they contain.

The particular challenge of imaging science in biomedical applications is to provide realistic and faithful displays, interactive manipulation and simulation, and accurate, reproducible measurements. The goal of visualization in biomedical computing is to formulate and realize a rational basis and efficient architecture for productive use of biomedical-image data. The need for new approaches to image visualization and analysis will become increasingly important and pressing as improvements in technology enable more image data of complex objects and processes to be acquired. The value of such visualization technology in medicine will derive more from the enhancement of real experience than from the simulation of reality. Visualizable objects in medicine extend across a vast range of scale, from individual molecules and cells, through the varieties of tissue and interstitial interfaces, to complete organs, organ systems, and body parts, and these objects include functional attributes of these systems, such as biophysical, biomechanical, and physiological properties. Medical applications include accurate anatomy and function mapping, enhanced diagnosis, and accurate treatment planning and rehearsal. However, the greatest potential for revolutionary innovation in the practice of medicine lies in direct, fully immersive, real-time multisensory fusion of real and virtual information data streams into an online, real-time visualization during an actual clinical procedure. Such capabilities are not yet available to the general practitioner; however, current advanced computer image-processing research has recently facilitated major progress toward fully interactive 3-D visualization and realistic simulation. The continuing goals for the development and acceptance of important visualization display technology are (a) improvement in the speed, quality, and dimensionality of the display and (b) improved access to the data represented in the display through interactive, intuitive manipulation and measurement of the data represented by the display.

Included in these objectives is the determination of quantitative information about the properties of anatomic tissues and their functions that relate to and are affected by disease. With these advances in hand, the delivery of several important clinical applications will soon be possible that will have a significant impact on medicine and the study of biology.


3D IMAGE GENERATION, DISPLAY AND VISUALIZATION

Display and visualization are not fully synonymous. Visualization of 3-D biomedical volume images has traditionally been divided into two different techniques:

1. surface rendering
2. volume rendering

Both techniques produce a visualization of selected structures in the 3-D volume image, but the methods involved in these techniques are quite different, and each has its advantages and disadvantages. Selection between these two approaches is often predicated on the particular nature of the biomedical-image data, the application to which the visualization is being applied, and the desired result of the visualization.

Surface Rendering:

Surface-rendering techniques characteristically require the extraction of contours (edges) that define the surface of the structure to be visualized. An algorithm is then applied that places surface patches or tiles at each contour point, and, with hidden-surface removal and shading, the surface is rendered visible.

ADVANTAGES:

• Relatively small amount of contour data, resulting in fast rendering speeds.
• Standard computer graphics techniques can be applied, including shading models (Phong, Gouraud).
• The contour-based surface descriptions can be transformed into analytical descriptions, which permits use with other geometric-visualization packages [e.g., computer-assisted design and manufacturing (CAD/CAM) software].
• Contours can be used to drive machinery to create models of the structure.
• Other analytically defined structures can be easily superposed with the surface-rendered structures.

DISADVANTAGES:

• Need to discretely extract the contours defining the structure to be visualized.
• Other volume image information is lost in this process, which may be important for slice generation or value measurement.
• Any interactive, dynamic determination of the surface to be rendered is prohibited, because the decision of which surface will be visualized is made during contour extraction.
• Due to the discrete nature of the surface patch placement, surface rendering is prone to sampling and aliasing artifacts on the rendered surface.
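The contour-extraction step described above can be sketched on a single 2-D image slice. This is a minimal illustration, not part of any particular rendering package: the grid, the threshold, and the function name are assumptions, and a voxel is simply treated as a contour point when it meets the threshold while a 4-connected neighbour does not.

```python
# Minimal sketch of contour extraction for surface rendering (illustrative).
# A voxel is a contour (edge) point if it is at or above the threshold while
# at least one 4-connected neighbour falls below it.

def extract_contour(slice2d, threshold):
    """Return (row, col) contour points of one 2-D image slice."""
    rows, cols = len(slice2d), len(slice2d[0])
    contour = []
    for r in range(rows):
        for c in range(cols):
            if slice2d[r][c] < threshold:
                continue
            # Check 4-connected neighbours for a value below the threshold.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and slice2d[nr][nc] < threshold:
                    contour.append((r, c))
                    break
    return contour

# A 4x4 slice with a bright 2x2 core: every core voxel touches a darker
# neighbour, so all four are contour points.
slice2d = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
print(extract_contour(slice2d, threshold=5))
```

Stacking such per-slice contours across the volume gives the contour set on which surface tiles are placed.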


Volume Rendering:

One of the most versatile and powerful image display and manipulation techniques is volume rendering. Volume-rendering techniques based on ray-casting algorithms have generally become the method of choice for visualization of 3-D biomedical volume images.

A ray-tracing model is used to define the geometry of the rays cast through the scene (the volume of data). To connect the source point to the scene, for each pixel of the screen a ray is defined as a straight line from the source point passing through the pixel. To generate the picture, the pixel values are assigned appropriate intensities "sampled" by the rays passing everywhere through the scene. For instance, for shaded-surface display, the pixel values are computed based on light models (intensity and orientation of light source(s), reflections, textures, surface orientations, etc.) where the rays have intersected the scene.

There are two general classes of volume display: transmission and reflection. For transmission-oriented displays, there is no surface identification involved: a ray passes totally through the volume, and the pixel value is computed as an integrated function. There are three important display subtypes in this family: brightest voxel, weighted summation, and surface projection (projection of a thick surface layer). For all reflection display types, voxel density values are used to specify surfaces within the volume image. Three types of functions may be specified to compute the shading: depth shading, depth-gradient shading, and real-gradient shading.
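The "brightest voxel" transmission subtype can be sketched with orthographic rays cast along one axis of a tiny volume; the data layout and function name are illustrative assumptions, and a full renderer would cast perspective rays and resample along them.

```python
# Sketch of a transmission-oriented display: one parallel ray is cast through
# the volume per screen pixel, and the brightest voxel along the ray becomes
# the pixel value (a maximum intensity projection).

def brightest_voxel_render(volume):
    """volume[z][y][x] -> 2-D image[y][x] of per-ray maxima along z."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    image = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # One ray per pixel, traversing the volume front to back.
            image[y][x] = max(volume[z][y][x] for z in range(depth))
    return image

volume = [
    [[1, 2], [3, 4]],   # front slice
    [[5, 0], [1, 9]],   # back slice
]
print(brightest_voxel_render(volume))  # per-pixel maxima over the two slices
```

The weighted-summation subtype would replace the `max` with a weighted sum over the samples along each ray.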


Full-gradient volume-rendering methods can incorporate transparency to show two different structures in the display, one through the other. The basic principle is to define the two structures with two segmentation functions. To accomplish this, a double threshold on the voxel density values is used: the opaque and transparent structures are specified by the thresholds, and a transparency coefficient is also specified. The transparent effect for each pixel on the screen is computed as a weighted function of the reflection caused by the transparent structure, the light transmission through that structure, and the reflection of the opaque structure.
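The weighted two-structure computation might be sketched per ray as follows. This is only an illustration of the double-threshold idea: the raw density values stand in for properly shaded reflections, and the thresholds, coefficient, and function name are assumptions rather than the full lighting model.

```python
# Sketch of two-threshold transparency: along one ray, the first sample at or
# above t_transparent is taken as the transparent structure's reflection, the
# first at or above t_opaque as the opaque structure's reflection, and the
# pixel blends the two with a transparency coefficient alpha.

def composite_pixel(ray_samples, t_transparent, t_opaque, alpha):
    """ray_samples: voxel densities along one ray, front to back."""
    refl_transparent = next((v for v in ray_samples if v >= t_transparent), 0.0)
    refl_opaque = next((v for v in ray_samples if v >= t_opaque), 0.0)
    # Weighted sum: the transparent structure's reflection plus the light it
    # transmits from the opaque structure behind it.
    return alpha * refl_transparent + (1.0 - alpha) * refl_opaque

# A ray that first hits a faint (transparent) structure, then a dense one.
print(composite_pixel([0.2, 0.5, 0.9], t_transparent=0.4, t_opaque=0.8, alpha=0.3))
```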

Examples of interactive volume-rendering operations, including selective surfaces, cut-planes, orthogonal dissections, and render masks, which permit mixing of rendering types (e.g., a transmission projection within a reflection surface).

ADVANTAGES:

• Direct visualization of the volume images without the need for prior surface or object segmentation, preserving the values and context of the original image data.
• Application of various rendering algorithms during the ray-casting process.
• Surface extraction is not necessary, as the entire volume image is used in the rendering process, maintaining the original volume image data.
• Capability to section the rendered image, visualize the actual image data in the volume image, and make voxel-value-based measurements from the rendered image.
• The rendered surface can be dynamically determined by changing the ray-casting and surface-recognition conditions during the rendering process.
• Surfaces can be displayed with shading, together with other parts of the volume, simultaneously.
• Data are displayed directly from the gray-scale volume.


CONCEPT OF 4D VISUALIZATION:

In the field of scientific visualization, the term "four-dimensional visualization" usually refers to the process of rendering a three-dimensional field of scalar values. While this paradigm applies to many different data sets, there are also uses for visualizing data that correspond to actual four-dimensional structures. Four-dimensional structures have typically been visualized via wireframe methods, but this process alone is usually insufficient for an intuitive understanding. The visualization of four-dimensional objects is possible through wireframe methods with extended visualization cues, and through ray-tracing methods. Both methods employ true four-space viewing parameters and geometry. The ray-tracing approach easily solves the hidden-surface and shadowing problems of 4D objects, and yields an image in the form of a three-dimensional field of RGB values, which can be rendered with a variety of existing methods. The 4D ray tracer also supports true four-dimensional lighting, reflections, and refractions.

The display of four-dimensional data is usually accomplished by assigning three dimensions to location in three-space, and the remaining dimension to some scalar property at each three-dimensional location. This assignment is quite apt for a variety of four-dimensional data, such as tissue density in a region of a human body, pressure values in a volume of air, or temperature distribution throughout a mechanical object.

Viewing in Three-Space

The first thing to establish is the viewpoint, or viewer location. This is easily done by specifying a 3D point in space that marks the location of the viewpoint. This is called the from-point or viewpoint.

3D Viewing Vectors and From/To Points; The Resulting View

The next thing to establish is the line of sight. This can be done either by specifying a line-of-sight vector or by specifying a point of interest in the scene. The point-of-interest method has several advantages. One advantage is that the person doing the rendering usually has something in mind to look at, rather than some particular direction. It also has the advantage that you can "tie" this point to a moving object, so we can easily track the object as it moves through space. This point of interest is called the to-point. Now, to pin down the orientation of the viewer/scene, a vector is specified that will point straight up after being projected to the viewing plane. This vector is called the up-vector.


Since the up-vector specifies the orientation of the viewer about the line of sight, the up-vector must not be parallel to the line of sight. The viewing program uses the up-vector to generate a vector that is orthogonal to the line of sight and that lies in the plane of the line of sight and the original up-vector.
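A minimal sketch of this derivation, assuming simple tuple vectors and illustrative helper names: the line of sight is normalized, and the up-vector is replaced by its component orthogonal to it (a Gram-Schmidt step), so it "points straight up" in the viewing plane.

```python
# Sketch of deriving the 3-D viewing basis from the from-point, to-point and
# up-vector. Helper names and the example geometry are illustrative.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def normalize(a):
    n = dot(a, a) ** 0.5
    return scale(a, 1.0 / n)

def viewing_basis(from_point, to_point, up):
    forward = normalize(sub(to_point, from_point))      # line of sight
    # Remove the line-of-sight component, leaving the in-plane "up" direction.
    up_ortho = normalize(sub(up, scale(forward, dot(up, forward))))
    return forward, up_ortho

# Viewer at (0, 0, 5) looking at the origin, with a tilted up-vector.
forward, up_ortho = viewing_basis((0, 0, 5), (0, 0, 0), (0, 1, 1))
print(forward)   # looking down -z
print(up_ortho)  # up-vector with its line-of-sight component removed
```

A third basis vector (the viewer's "right") would follow as the cross product of these two.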

If we're going to use perspective projection, we need to specify the amount of perspective, or "zoom", that the resultant image will have. This is done by specifying the angle of the viewing cone, also known as the viewing frustum. The viewing frustum is a rectangular cone in three-space that has the from-point as its tip and that encloses the projection rectangle, which is perpendicular to the cone axis. The angle between opposite sides of the viewing frustum is called the viewing angle. It is generally easier to let the viewing angle specify the angle for one dimension of the projection rectangle, and then to tailor the perpendicular angle of the viewing frustum to match the other dimension of the projection rectangle.

The greater the viewing angle, the greater the amount of perspective (wide-angle effect); the lower the viewing angle, the lower the amount of perspective (telephoto effect). The viewing angle must reside in the range of 0 to pi, exclusive.

The 3D Viewing Vectors and Viewing Frustum

The angle from D to From to B is the horizontal viewing angle, and the angle from A to From to C is the vertical viewing angle.

To render a three-dimensional scene, we use these viewing parameters to project the scene onto a two-dimensional rectangle, also known as the viewport. The viewport can be thought of as a window on the display screen between the eye (viewpoint) and the 3D scene. The scene is projected onto (or "through") this viewport, which then contains a two-dimensional projection of the three-dimensional scene.

Viewing in Four-Space

To construct a viewing model for four dimensions, the three-dimensional viewing model is extended to four dimensions. Three-dimensional viewing is the task of projecting the three-dimensional scene onto a two-dimensional rectangle. In the same manner, four-dimensional viewing is the process of projecting a 4D scene onto a 3D region, which can then be viewed with regular 3D rendering methods. The viewing parameters for the 4D-to-3D projection are similar to those for 3D-to-2D viewing.


As in the 3D viewing model, we need to define the from-point. This is conceptually the same as the 3D from-point, except that the 4D from-point resides in four-space. Likewise, the to-point is a 4D point that specifies the point of interest in the 4D scene. The from-point and the to-point together define the line of sight for the 4D scene. The orientation of the image view is specified by the up-vector plus an additional vector called the over-vector; the over-vector accounts for the additional degree of freedom in four-space. Since the up-vector and over-vector specify the orientation of the viewer, the up-vector, over-vector, and line of sight must all be linearly independent.

4D Viewing Vectors and Viewing Frustum

The viewing angle is defined as for three-dimensional viewing and is used to size one side of the projection parallelepiped; the other two sides are sized to fit the remaining dimensions of the projection parallelepiped. For this work, all three dimensions of the projection parallelepiped are equal, so all three viewing angles are the same.

RAY TRACING ALGORITHM:

Ray tracing solves several rendering problems in a straightforward manner, including hidden surfaces, shadows, reflection, and refraction. In addition, ray tracing is not restricted to rendering polygonal meshes; it can handle any object that can be interrogated to find the intersection point of a given ray with the surface of the object. This property is especially nice for rendering four-dimensional objects, since many N-dimensional objects can be easily described with implicit equations.
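As an illustration of interrogating an implicitly defined 4-D object, a ray can be intersected with a hypersphere |p - c|^2 = r^2 by substituting p = o + t*d and solving the resulting quadratic in t, exactly as in 3-D ray tracing; only the vector length changes. The geometry and names below are assumptions for the sketch.

```python
# Sketch of ray/hypersphere intersection in four-space: substituting the ray
# p = origin + t*direction into |p - center|^2 = radius^2 yields an ordinary
# quadratic a*t^2 + b*t + c = 0 in the ray parameter t.

def ray_hypersphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None if no hit."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the hypersphere
    sqrt_disc = disc ** 0.5
    for t in ((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a)):
        if t > 0.0:
            return t                     # nearest intersection in front
    return None

# A ray along +x from (-3, 0, 0, 0) hits the unit hypersphere at x = -1, t = 2.
print(ray_hypersphere((-3, 0, 0, 0), (1, 0, 0, 0), (0, 0, 0, 0), 1.0))
```

The same pattern works for any dimension, which is why implicit surfaces suit N-dimensional ray tracing so well.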

A 2x2x2 4D Raytrace Grid

Other benefits of ray tracing extend quite easily to 4D. As in the 3D case, 4D ray tracing handles simple shadows merely by checking to see which objects obscure each light source. Reflections and refractions are also easily generalized, particularly since the algorithms used to determine refracted and reflected rays use equivalent vector arithmetic.


VOLOCITY - A RENDERING SYSTEM: "New dimensions in High Performance Imaging"

Volocity is the realization of Improvision's objective to provide the scientist with an interactive volume visualization system that will run on a standard desktop computer. Volume interactivity is the key to providing the user with an enhanced perception of depth and realism. Interactivity also allows the scientist to rapidly explore and understand large volumes of data. The highly advanced technology developed exclusively by Improvision for rapid, interactive volume visualization of 3D and 4D volumes has received several awards for innovation and is the subject of worldwide patent applications.

Volocity is the first true-color 4D rendering system designed for biomedical imaging. It uses new, highly advanced algorithms to provide high-speed, easy-to-use, interactive rendering of time-resolved color 3D volumes. Volocity allows the user to visualize a 3D object and then observe and interact with it over time, for the first time providing scientists with a system to dynamically visualize both the structure and the purpose of biological structures.

Volocity Acquisition:

The Volocity Acquisition Module is designed for high-performance acquisition of 3D sequences. It provides easy-to-use, high-speed image capture capability and is compatible with a range of scientific-grade cameras and microscopes.

Volocity Acquisition incorporates a unique parallel processing and video streaming architecture. This technology is designed to acquire image sequences direct to hard disk at the maximum frame rate possible from each supported camera. The direct-to-disk streaming technology has the additional benefit of continuously saving acquired data. Images are captured directly into an Image Sequence window and can be exported as QuickTime or AVI movies.

Features

• Fast and highly interactive volume exploration in 3D and 4D.
• High-speed parallel imaging and video streaming architecture.
• Fly-through rendering for visualizing events inside biological structures.
• Real-time Auto Contrast for fast focusing and acquisition.
• Intuitively easy to use for increased productivity.
• Object classification, measurement and tracking in all dimensions.
• High-quality restoration of confocal and wide field microscope images.


Volocity Visualization

The Volocity Visualization Module provides an extensive range of visualization and publication features. The Volocity 3D view enables the user to interactively explore a 3D rendered object. Volocity Visualization also includes the Movie Sequencer, a unique authoring tool enabling the user to create and store a volume animation template. This template, or Movie Sequence, can be applied to any 3D Rendering View to create a series of pictures for export as an AVI or QuickTime movie file. The Movie Sequencer is an easy-to-use feature for the collection of visually unique animations, which clearly present the information of relevant scientific interest.

Features:

• Interactive rendering of high resolution 3D and 4D volumes.

• Rapid publication of 3D and 4D movies as AVI, QT and QTVR files.

• A versatile range of 3D rendering modes for different sample types.

• Perspective rendering for enhanced realism.

• Fly-through rendering for visualizing events inside biological structures.

• Easy to use animation controls for playback of time resolved volumes.

Volocity Classification

Volocity Classification is designed to identify, measure and track biological structures in 2D, 3D and 4D. This unique module incorporates innovative new classification technology for rapid identification and quantitation of populations of objects in 2D and 3D. The Classifier Module enables the user to 'train' Volocity to automatically identify specific biological structures. Complex classification protocols for detecting objects can be created, saved to a palette, and then executed. Classifiers can be applied to a single 3D volume, to multi-channel 3D volumes and to time-resolved volumes.

Features:

• Rapid classification and measurement of 2D, 3D and 4D images.
• Automatic tracking of objects in 2D and 3D.
• Overlay of measured objects on a 2D image or 3D volume.
• Comprehensive range of intensity, morphological and volume measurements.


• Measurements from multiple channels for channel comparison and colocalization.

• Data export to industry standard spreadsheets.

Volocity Restoration

Volocity Restoration includes restoration algorithms and measured or calculated PSF generation options for confocal and wide field microscopes. The Volocity restoration algorithms are designed for rapid, high-quality restoration of 4D and 3D volumes and for accurate comparison and quantitation of time-resolved changes.

The Iterative Restoration algorithm is an award-winning restoration algorithm developed by Improvision from published Maximum Entropy techniques. The Iterative Restoration feature is an exceptional technique for eliminating both noise and blur, producing superior restoration results and a significant improvement in resolution in XY and Z.

The Fast Restoration algorithm is an ultra-fast routine developed by Improvision. This algorithm uniquely uses every voxel in the volume in a single-pass process to improve both the visual quality and the precision of the result. This feature is extremely fast to compute and produces superior results when viewed in XY.

Features

• Iterative Restoration for improvement of XY and Z resolution.

• Fast Restoration for improvement of XY resolution. 

• Confocal and Wide Field PSF Generator. 

• Tools for illumination correction. 

• Measured PSF Generator for confocal and wide field images. 

• Batch processing of 3D sequences. 


4D VISUALIZATION IN LIVING CELLS:

Due to recent developments in genetic engineering, cell biologists are now able to obtain cell lines expressing fusion proteins made of a protein of interest and an autofluorescent protein, GFP. By observing such living cells with a confocal microscope, it is possible to study their space-time behavior.

Because they correspond to the dimensionality of physical reality, in contrast with 2D and/or 3D images, which are 'only' reductions by projection or time fixation, 4D images provide an integral approach to the quantitative analysis of motion and deformation in a real three-dimensional world over time.

OBJECTIVES:

o To easily visualize the evolution of the structures in 3D during all the steps of the experiment.
o To be able to select one given structure in order to study its spatial behavior relative to others (fusion, separation).
o To measure parameters such as shape, volume, number, and relative spatial localization for each time point, as well as trajectories and speed of displacement for each structure, either alone or relative to the others.


o To accumulate the data of several structures within many cells and in different experimental conditions in order to model their behavior.

Volume Slicing and Projection: Reduction from 4D to 3D:

The slicing operation consists in choosing one of the parameters corresponding to the dimensions {x, y, z, t} and giving it a constant value. The most usual choice is to consider the data for a fixed value of the time t. In the projection operation, for each {x, y, z} volume at a given time, a single 2D image is obtained by integrating the data along an axis, the z-axis for example. This can be done using a Maximum Intensity Projection algorithm. Then, all these projection images are used to build a new volume (x, y, t), as illustrated in the Figure. Under certain conditions, this has the advantage of correctly illustrating topological changes over time.

As these operations yield "classical" 3D volumes, we can apply 3D visualization methods, like isosurfacing, direct volume rendering and others, to compare their effectiveness in correctly analyzing and interpreting all the information contained in the series.
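The projection operation can be sketched as a Maximum Intensity Projection along z for each time point, with the resulting 2D images stacked into the new (x, y, t) volume. The toy 4D series, its `data[t][z][y][x]` layout, and the function names are illustrative assumptions.

```python
# Sketch of the 4D -> 3D projection: each {x, y, z} volume at one time point
# is collapsed along z with a Maximum Intensity Projection, and the resulting
# 2-D images are stacked into a new volume indexed by (t, y, x).

def mip_over_z(volume):
    """volume[z][y][x] -> image[y][x] holding the maximum along z."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(width)]
            for y in range(height)]

def project_4d_to_3d(series):
    """series[t][z][y][x] -> new volume indexed [t][y][x]."""
    return [mip_over_z(vol) for vol in series]

series = [
    [[[1, 0], [0, 2]], [[0, 3], [4, 0]]],   # t = 0: two z-slices
    [[[5, 0], [0, 0]], [[0, 6], [0, 7]]],   # t = 1
]
print(project_4d_to_3d(series))
```

The slicing operation, by contrast, would simply pick `series[t]` for one fixed t.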

Space-Time Extraction

Here the idea is to model the objects of interest and their evolution, with a 4D tool that is

also suited for visualization and quantification.

Multi-Level Analysis:

The reconstruction of one object in the 4D data can be applied to every kind of object, even if one is included inside another. The Figure shows two levels, corresponding to the nucleolus and the UBF-GFP spots. For example, one can compute the evolution of both the nucleoli and the UBF-GFP proteins. If the nucleoli are moving fast, the movements of the UBF-GFP proteins must be corrected significantly to take this into account.

IMAGE PROCESSING TECHNIQUE:


Parameter Gluing:

A common scientific approach is to reduce the complexity of a phenomenon by choosing a parameter of the system, setting it to a defined value (gluing it!), and analyzing how the other parameters evolve.

The trajectories can be represented in three different modes, according to the specifications of the user. Their normal representation is a set of cylinders showing the evolution of the objects, with small spheres marking the topological breaks. This representation is enhanced by modifying the radius of the cylinders according to the volume of the objects (see Figure).

Two proteins merged into one

Integration of time into a spatial dimension:

In the last visualization mode, time is integrated into a spatial dimension, leading for example to a {x, y, z+t} 3D representation (see Figure). This approach has the advantage of better presenting the data variations over time, and it does not alter the data during processing. It produces a representation of the evolution of the center of mass of the object.
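The center-of-mass track underlying this representation can be sketched as follows: an intensity-weighted centroid per time point. The nested-list layout indexed [z][y][x] is again a hypothetical format chosen for the example.

```python
def center_of_mass(volume):
    """Intensity-weighted centroid (x, y, z) of volume[z][y][x]."""
    total = sx = sy = sz = 0.0
    for z, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for x, value in enumerate(row):
                total += value
                sx += value * x
                sy += value * y
                sz += value * z
    return (sx / total, sy / total, sz / total)

def com_track(series):
    """One (t, x, y, z) sample per time point: a {x, y, z+t}-style
    summary of the object's motion through the series."""
    return [(t,) + center_of_mass(volume) for t, volume in enumerate(series)]

# A single bright voxel moving one step along x between two time points.
series = [
    [[[1, 0]], [[0, 0]]],  # t = 0: voxel at x=0, y=0, z=0
    [[[0, 1]], [[0, 0]]],  # t = 1: voxel at x=1, y=0, z=0
]
print(com_track(series))  # [(0, 0.0, 0.0, 0.0), (1, 1.0, 0.0, 0.0)]
```

Plotting these samples with time mapped onto a spatial axis gives exactly the kind of {x, y, z+t} view described in the text.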

Space-Time Deformable Model:

It enables the representation of the evolution of the objects, and different visualization modes.


Figure shows that the evolution of the spots in the upper part of the image is not easy to qualify. Sometimes false merges occur: when two spots come closer together, they may be merged by the deformable model. The resolution of the model is thus important for the reconstruction.

WORKSTATION FOR ACQUISITION, RECONSTRUCTION AND VISUALIZATION OF 4D IMAGES OF THE HEART

Objectives

A workstation is developed for the acquisition, reconstruction, processing and visualization of 4D images of the heart. These images are obtained from two-dimensional echocardiography equipment using the method of transthoracic rotational sweep.

Methodology

One important step in the reconstruction of 4D images of the heart is to build an echocardiography database. This is generally obtained by scanning the heart with the ultrasound beam; in this work, the method of Transthoracic Rotational Fan is used. In this method, the transducer is rotated about its longitudinal axis.

Transthoracic rotational sampling, apical view. a) Position and movement of the transducer. b) Two-dimensional echo obtained on slice plane 1. c) Volume swept by the ultrasonic beam.
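The rotational sweep can be parameterized by the set of angles at which the 2D slice planes are captured. A minimal sketch follows; the plane count and the 180-degree span are illustrative assumptions, not the workstation's actual acquisition settings.

```python
def sweep_angles(n_planes, total_deg=180.0):
    """Rotation angles (in degrees) about the transducer's longitudinal
    axis for a rotational sweep of n_planes evenly spaced slice planes."""
    step = total_deg / n_planes
    return [i * step for i in range(n_planes)]

# Six evenly spaced planes covering a half rotation of the transducer.
print(sweep_angles(6))  # [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
```

Because the planes share a rotation axis, a half rotation already sweeps the full conical volume shown in the figure.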

To obtain a sequence of 4D data of the heart, commercial two-dimensional echocardiography equipment is used, but the processes involved are complicated. This is because the volumetric images must be generated during the heart cycle, assembled from 2D echoes captured at different times and in different acquisition planes. Additionally, diverse anatomical events generate spatial noise, and the elimination of small anatomical detail during the 4D data acquisition can also produce errors. These inaccuracies can degrade the reconstructed image. The acquisition of data is synchronized with the breathing rhythm and the heart rhythm; the manipulation of the ultrasonic transducer is realized by a motorized servo-mechanism


controlled by computer. A graphic processing system is also used to allow control of the whole station and the processing and visualization of the 4D database. Figure 2 shows a scheme of the designed workstation.

A scheme of the designed workstation. The ultrasonic test is realized in real time using a two-dimensional echographic ESOATE PARTNER model AU-3 with a sectoral multi-frequency transducer of 3.5/5.0 MHz.

RESULT

Three-dimensional image of an aortic bi-valve, to the left in diastole and to the right in systole. An enlargement of the border of the valves and their complete opening can be appreciated.

The obtained results are highly satisfactory. The 4D images of the heart obtained using the workstation are of high quality, showing the details of the heart cavities and the valvular structure. From the clinical point of view this is of great importance, since the


medical exam is simple and non-invasive. Moreover, it does not need sophisticated equipment and can be installed in a consulting room. Additionally, objective data about the heart anatomy are obtained. These images can be a useful guide for the cardiovascular surgeon.

4D IMAGE WARPING FOR MEASUREMENT OF LONGITUDINAL BRAIN CHANGES:

For robustly measuring temporal morphological brain changes, a 4D image warping mechanism can be used. Longitudinal stability is achieved by considering all temporal MR images of an individual simultaneously in image warping, rather than by individually warping a 3D template to an individual, or by warping the images of one time-point to those of another time-point. Moreover, image features that are consistently recognized in all time-points guide the warping procedure, whereas spurious features that appear inconsistently at different time-points are eliminated. This deformation strategy significantly improves robustness in detecting anatomical correspondences, thereby producing smooth and accurate estimations of longitudinal changes. Experimental results show a significant improvement of the 4D warping method over the previous 3D warping method in measuring subtle longitudinal changes of brain structures.

METHOD:

4D-HAMMER involves the following two steps:

(1) Rigid alignment of the 3D images of a given subject acquired at different time points, in order to produce a 4D image. 3D-HAMMER is employed to establish the correspondences between neighboring 3D images, and then to align each image (time t) to its previous-time image (t-1) by a rigid transformation calculated from the established correspondences.

(2) Hierarchical deformation of the 4D atlas to the 4D subject images, via a hierarchical attribute-based matching method. Initially, the deformation of the atlas is influenced primarily by voxels with distinctive attribute vectors, thereby minimizing the chances of poor matches and also reducing the computational burden. As the deformation proceeds, voxels with less distinctive attribute vectors gradually gain influence over the deformation.
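Step (1) derives a rigid transform from point correspondences. A full solution also estimates the rotation, typically via SVD (e.g. the Kabsch algorithm); as a hedged sketch, only the least-squares translational part is shown here, with made-up point lists standing in for the correspondences that HAMMER would establish.

```python
def centroid(points):
    """Mean of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def translation_to_previous(points_t, points_prev):
    """Least-squares translation mapping the time-t correspondence points
    onto their time-(t-1) counterparts (rotation omitted for brevity):
    the difference of the two point-set centroids."""
    ct, cp = centroid(points_t), centroid(points_prev)
    return tuple(cp[i] - ct[i] for i in range(3))

# Hypothetical correspondences: the time-t points are the time-(t-1)
# points shifted by (2, 0, -1), so the aligning translation undoes that.
prev = [(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)]
curr = [(x + 2.0, y, z - 1.0) for x, y, z in prev]
print(translation_to_previous(curr, prev))  # (-2.0, 0.0, 1.0)
```

Applying this translation (plus the omitted rotation) to each time point in turn chains the 3D images into one rigidly aligned 4D image.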


Comparing the performances of 4D- and 3D-HAMMER in estimating the longitudinal changes from a subject.

MED-SANARE: Medical Diagnosis Support System within a Dynamic Augmented Reality Environment

Advanced medical imaging technology allows the acquisition of highly resolved 3D images over time, i.e. 4D images of the beating heart. 4D visualization and computer-supported precise measurement of medical indicators (ventricle volume, ejection fraction, wall motion, etc.) have a high potential to greatly simplify the understanding of the morphology and dynamics of the heart cavities, and simultaneously to reduce the possibility of a false diagnosis. 4D visualization aims at providing all information conveniently in single, stereo, or interactively rotating animated views.
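Of the indicators listed, the ejection fraction has a particularly simple definition: EF = (EDV - ESV) / EDV, the fraction of the end-diastolic volume pumped out per beat. A minimal sketch follows; the volumes used are illustrative numbers, not measured data.

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction in percent from end-diastolic (EDV) and
    end-systolic (ESV) ventricular volumes, both in millilitres."""
    stroke_volume = edv_ml - esv_ml  # blood ejected in one beat
    return 100.0 * stroke_volume / edv_ml

# Illustrative values: EDV 120 ml, ESV 50 ml.
print(round(ejection_fraction(120.0, 50.0), 1))  # 58.3
```

In a 4D pipeline, EDV and ESV would themselves be computed from the segmented ventricle at the diastolic and systolic time points of the reconstructed sequence.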

The goal of the 2nd year of the Med-SANARE project is twofold. On the one hand, a virtual table metaphor will be utilized to set up a visionary high-end cardiac diagnosis demonstrator for educational purposes that makes use of augmented reality (AR) techniques. On the other hand, a Cardiac Station will be implemented as a functionally reduced solution that supports image evaluation using standard PC-based technology. The functionality offered will be sufficient to successfully perform the tasks required by the diagnostic procedure.

For both systems, realistic and detailed modeling and visualization play a crucial role.

Modeling/Visualization Pipeline:

Figure shows the pipeline for the extraction and visualization of heart cavities from

image data that will be integrated into the Cardiac Station.


The data is either visualized without any preprocessing, applying direct volume rendering, or first segmented by application of semi-automatic 2D/3D segmentation methods. A subsequent triangulation process transforms the result into hardware-renderable polygonal surfaces that can also be tracked over the temporal sequence. Finally, the time-variant model is visualized by application of advanced 5D visualization methods.
