Pie chart but in another shape

I just want to ask if it is possible to make a pie chart in another shape.
For example, say two candidates ran for governor in a state and I want to show the results in a chart. I want the shape of the chart to resemble the shape of the state's geographical outline.
I did some digging, and this Adobe thread is the only thing that turned up which might help me (but not really): https://forums.adobe.com/thread/988130

As your Adobe thread implies, there are (at least) three issues to consider:
1) You want to show the votes each candidate received as a portion of the area of the state. If your state is nearly square, you could overlay a grid and assign each candidate a number of grid cells in proportion to the votes they received. If the grid cells are county or precinct outlines, that works even better; but this isn't a pie chart, because a pie chart uses a polar coordinate system.
2) If you really must have a pie chart, which is polar, consider that the average viewer may not be able to visually integrate the areas into meaningful proportions. Further, you will have to integrate the area swept out by the sectors of the pie, as on a radar screen, and this contour integration is made harder by the fact that you must do it numerically: you must sample the boundary distance as a function of angular displacement from some center you have chosen, such as the state capitol. In polar coordinates the area between angles θ1 and θ2 is (1/2)∫ r(θ)² dθ, so each candidate's sector ends where the cumulative area reaches their share of the vote (see the sketch after this list). Depending on the location of the center, your visual could become even more distorted; Idaho comes to mind.
3) A good compromise might be simply to overlay a pie chart on top of a silhouette or map outline of the state, with appropriate drop shadows and emphasis to make both the pie chart and the state outline pop. It would certainly be much quicker, as well as much more readable.
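
To make the sector cutting in 2) concrete, here is a minimal sketch in Java (hypothetical names; assumes the boundary distance r[i] has been sampled at n uniform angular steps around the chosen center). It accumulates (1/2) r² dθ and records the angle at which each candidate's cumulative vote share is reached:

/**
 * Sketch: split a star-shaped region into pie sectors whose areas are
 * proportional to votes. r[i] is the sampled distance from the chosen
 * center to the boundary at angle i * 2*PI / r.length.
 */
static double[] sectorCutAngles(double[] r, double[] votes) {
    int n = r.length;
    double dTheta = 2 * Math.PI / n;
    double[] cumArea = new double[n + 1];
    for (int i = 0; i < n; i++) {
        // polar area element: dA = (1/2) * r(theta)^2 * dTheta
        cumArea[i + 1] = cumArea[i] + 0.5 * r[i] * r[i] * dTheta;
    }
    double totalArea = cumArea[n];
    double totalVotes = 0;
    for (double v : votes) totalVotes += v;

    double[] cuts = new double[votes.length];
    double target = 0;
    int i = 0;
    for (int c = 0; c < votes.length; c++) {
        target += votes[c] / totalVotes * totalArea; // area at which this sector ends
        while (i < n && cumArea[i + 1] < target) i++;
        cuts[c] = Math.min(i + 1, n) * dTheta;       // cut angle for candidate c
    }
    return cuts;
}

Note this only works if every boundary point is visible from the chosen center (a star-shaped region), which is exactly why an awkward center, as in the Idaho example, distorts the result.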

Related

Determining area of regions in image based on user input with JavaFX

Introduction
The title is a bit complicated, so let's break it down:
I have an image submitted by a user
The image is a top view of a landscape featuring clearly marked regions. For example, if this were a park, the image would be a top view of the park's layout.
I need to allow the user to classify different elements in the image and estimate the area occupied by those elements. Continuing with the park analogy: the park may have two pavilions and a sand volleyball court. I must allow the user to mark the points of interest (let's say the volleyball court) and compute their area (given the overall dimensions of the depicted park).
Current Ideas
I think I should create a buffered image and use that as the background of a canvas.
I'm not sure about the user input. My first idea was to have users drag rectangles associated with a specific feature (e.g. a red rectangle for volleyball courts) onto the region of the image. Rectangles work because the elements are mostly rectangular, but I don't know whether users can resize rectangles.
To reiterate, the main problem is determining the area occupied by physical structures in a given image. No machine vision, just plain old Mouse Events.
How should I be approaching the user input dilemma? Any APIs I should be digging through?
Please let me know if I can improve the question and explanation.
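
Resizable rectangles are doable: javafx.scene.shape.Rectangle is a Node, so mouse-pressed/dragged handlers can move its corners. The area arithmetic itself is just a pixel-to-meters scale; a minimal sketch, assuming the real-world width of the pictured park is known (all names here are hypothetical):

import javafx.geometry.Rectangle2D;

/** Sketch: convert a region marked in pixel space into a real-world area. */
public final class AreaEstimator {
    private final double metersPerPixel;

    /** realWorldWidthMeters is the known width of the area the image depicts. */
    public AreaEstimator(double imageWidthPixels, double realWorldWidthMeters) {
        this.metersPerPixel = realWorldWidthMeters / imageWidthPixels;
    }

    /** Area in square meters of a rectangle marked by the user, in pixels. */
    public double areaOf(Rectangle2D marked) {
        return marked.getWidth() * metersPerPixel
             * marked.getHeight() * metersPerPixel;
    }
}

For example, a 120x80-pixel rectangle on a 1000-pixel-wide image of a 500 m wide park comes out to (120*0.5) * (80*0.5) = 2400 square meters.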

Color normalization based on known objects

I was unable to find literature on this.
The question: given some photograph with a well-known object in it (say, something printed for this purpose), how well does it work to use that object to infer the lighting conditions, as a method of color-profile calibration?
For instance, say we print out the rainbow peace flag and then take a photo of it under various lighting conditions with a consumer-grade flagship smartphone camera (say, an iPhone 6 or Nexus 6). The underlying question is whether using known references within the image is a good technique for calibrating the colors throughout the image.
There are of course a number of issues regarding the variance of lighting conditions in different regions of the photograph, along with which wavelengths the device is capable of differentiating even in the best circumstances; but let's set those aside.
Has anyone worked with this technique or seen literature regarding it, and if so, can you point me in the direction of some findings?
Thanks.
I am not sure if this is a standard technique, but one simple way to calibrate your color channels would be to learn a regression model (for each pixel) between the colors present in the region and their actual colors. If you have some shots of known images, you should have sufficient data to learn the transformation model using a neural network (or a simpler model such as linear regression if you like, although a NN would be able to capture multi-modal mappings). You could even do patch-based regression using a NN on small patches (say 8x8 or 16x16) if you need to learn some spatial dependencies between intensities.
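At the simpler end of that spectrum, here is a sketch of fitting one channel as a gain/offset (affine) map by ordinary least squares; the names are hypothetical, and the samples are paired (observed, true) values taken from inside the known reference object:

/**
 * Sketch: fit observed -> true for one channel as t = a*o + b by
 * ordinary least squares, from paired samples inside the reference object.
 */
static double[] fitChannel(double[] observed, double[] truth) {
    int n = observed.length;
    double sumO = 0, sumT = 0, sumOO = 0, sumOT = 0;
    for (int i = 0; i < n; i++) {
        sumO += observed[i];
        sumT += truth[i];
        sumOO += observed[i] * observed[i];
        sumOT += observed[i] * truth[i];
    }
    double a = (n * sumOT - sumO * sumT) / (n * sumOO - sumO * sumO); // gain
    double b = (sumT - a * sumO) / n;                                 // offset
    return new double[] { a, b };
}

Fit each of R, G, and B separately; a neural network replaces this fit when the mapping is non-linear or the channels interact.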
This should be possible, but you should pay attention to the way your known object reacts to light. Ideally it should be non-glossy, have identical colours when pictured from an angle, be totally non-transparent, and reflect any wavelengths outside the visible spectrum to which your sensor is still sensitive (IR, UV; no filter is perfect) uniformly across all of the differently coloured regions. This last requirement is very important and very hard to get right.
However, the main issue you have with a coloured known object is: what are the actual colours of the different regions in RGB(*)? You can determine the effect of different lighting conditions relative to one another this way, but never relative to some ground truth.
The solution: use a uniformly white, non-reflective, non-transparent surface; a sufficiently thick sheet of white paper should do just fine. Take a non-overexposed photograph of the sheet in your scene, and you know:
R, G and B should be close to equal
R, G and B should be nearly 255.
From those two facts and the R, G and B values you actually get from the sheet, you can determine any shift in colour and brightness in your scene. Assume that black is still black (usually a reasonable assumption) and use linear interpolation to determine the shift experienced by pixels coloured somewhere between 0 and 255 on any of the axes.
(*) or other colourspace of your choice.
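
A sketch of that correction (hypothetical names): with black fixed at black, the linear interpolation reduces to one gain per channel, chosen so the measured sheet colour maps back to neutral white:

/**
 * Sketch: white-sheet white balance. whiteR/G/B are the values actually
 * measured on the sheet (ideally all near 255). Assuming black stays
 * black, scale each channel linearly so the sheet becomes neutral white.
 */
static int[] correctPixel(int r, int g, int b,
                          double whiteR, double whiteG, double whiteB) {
    int cr = (int) Math.min(255, r * 255.0 / whiteR);
    int cg = (int) Math.min(255, g * 255.0 / whiteG);
    int cb = (int) Math.min(255, b * 255.0 / whiteB);
    return new int[] { cr, cg, cb };
}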

stepped color shading in highcharts doughnut chart

I need to create a chart like this using Highcharts:
This chart was created using some other charting tool, and this specific chart type is called a step chart.
Basically, the different categories displayed in the chart above, like Efficiency, Mortality, etc., can each have different values like 1%, 7%, or 51%.
I need to create shading with color codes, displayed on a scale as in the image, with the color varying at a granularity of 1% for each category.
This means the color for Efficiency at a value of 5% will be different from the color for Mortality at 6%. Is this kind of dynamic color shading available in Highcharts?
Please keep in mind that I need to replicate the exact look of the chart, with the scale and color coding, using Highcharts. I also need to apply some aggregation logic to come up with an overall score and highlight that overall score on the color scale with some kind of marker, as you can see in the image.
Thanks for any help you can provide.
I'm not sure about the aggregation etc., but that seems quite easy: just calculate those values before passing the data to the chart. Anyway, it looks like the colors don't match the values on the pie. For example, Safety is 17% but it's green; shouldn't it be red? I assume that 50% is white/grey (the middle of the color axis), so all values below that should be reddish. Or maybe 17% of 25% defines the color, rather than the percentage of the pie.
Anyway, I think the biggest challenge is adapting colorAxis from the heatmap module. Let me help you with that: http://jsfiddle.net/w9nuha8n/1/
(function (H) {
    // add colorAxis support to the pie series
    H.seriesTypes.pie.prototype.axisTypes = ['colorAxis'];
    H.seriesTypes.pie.prototype.optionalAxis = 'colorAxis';

    // after translating the points, assign each slice its color
    H.wrap(H.seriesTypes.pie.prototype, "translate", function (p, positions) {
        p.call(this, positions);
        this.translateColors();
    });

    // copy the color-mixing method from the heatmap series
    H.seriesTypes.pie.prototype.translateColors =
        H.seriesTypes.heatmap.prototype.translateColors;

    // use "percentage" or "value" or a custom param to calculate the color
    H.seriesTypes.pie.prototype.colorKey = 'percentage';
})(Highcharts);
As you can see, you can use a customized parameter on a pie chart to colorize the slices, which I guess will be helpful. For example: http://jsfiddle.net/w9nuha8n/2/ (y sets the value, but myParam is used to define the colors).
Just an extra note: it's Highcharts, not your second library, so not everything will look exactly the same as in your image, but it should be possible to achieve with the Renderer (for example, the top/bottom dotted lines).

famo.us: drawing a pie chart

I need to take some data as input like:
{
"category1" : 200,
"category2" : 153,
"category3" : 310
}
and use it to display a pie chart. The pie will be a donut (I'm going to show some summary text in the empty center area of the donut) and as you can probably guess each "category" will be one slice of the pie based on how much of the overall sum of values it represents. Each piece will be a different color and take up an angle proportional to its value.
I have no idea how to draw a circle with famo.us, let alone an arc of a donut. I also want to handle click events on each piece of the pie individually, but I'm guessing that's not the tricky part. Thank you!
For one, circles in Famo.us can be made simply by applying a 50% borderRadius property to any Surface element.
When it comes to arcs, there is nothing in Famo.us that makes them easier to create. You will have to look into canvas or SVG.
Here is an example of this in canvas:
http://wickedlysmart.com/how-to-make-a-pie-chart-with-html5s-canvas/
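
Whichever backend you pick, the slice geometry is the same arithmetic: each category gets a sweep angle proportional to its share of the total. A sketch (Java here just to show the math; the names are hypothetical):

import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch: convert category values into start/sweep angles for donut slices. */
public class SliceAngles {
    public static void main(String[] args) {
        Map<String, Double> data = new LinkedHashMap<>();
        data.put("category1", 200.0);
        data.put("category2", 153.0);
        data.put("category3", 310.0);

        double total = data.values().stream().mapToDouble(Double::doubleValue).sum();
        double start = 0; // radians, measured from 3 o'clock
        for (Map.Entry<String, Double> e : data.entrySet()) {
            double sweep = 2 * Math.PI * e.getValue() / total;
            System.out.printf("%s: start=%.3f rad, sweep=%.3f rad%n",
                              e.getKey(), start, sweep);
            start += sweep;
        }
    }
}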

How to plot large data vectors accurately at all zoom levels in real time?

I have large data sets (10 Hz data, so 864k points per 24 hours) which I need to plot in real time. The idea is that the user can zoom and pan into highly detailed scatter plots.
The data is not very continuous and there are spikes. Since the data set is so large, I can't plot every point each time the plot refreshes.
But I also can't just plot every nth point, or else I will miss major features like large but short spikes.
Matlab does it right. You can give it an 864k vector full of zeros, set any one point to 1, and it will plot correctly in real time with zooms and pans.
How does Matlab do it?
My target system is Java, so I would be generating views of this plot in Swing/Java2D.
You should try the file from MATLAB Central:
https://mathworks.com/matlabcentral/fileexchange/15850-dsplot-downsampled-plot
From the author:
This version of "plot" will allow you to visualize data that has a very large number of elements. Plotting large data sets makes your graphics sluggish, but most of the time you don't need all of the information displayed in the plot. Your screen only has so many pixels, and your eyes won't be able to detect any information not captured on the screen.
This function will downsample the data and plot only a subset of it, thus reducing the memory requirement. When the plot is zoomed in, more information gets displayed. Some work is done to make sure that outliers are captured.
Syntax:
dsplot(x, y)
dsplot(y)
dsplot(x, y, numpoints)
Example:
x = linspace(0, 2*pi, 1000000);
y1 = sin(x) + 0.02*cos(200*x) + 0.001*sin(2000*x) + 0.0001*cos(20000*x);
dsplot(x, y1);
I don't know how Matlab does it, but I'd start with Quadtrees.
Dump all your data points into the quadtree, then to render at a given zoom level, you walk down the quadtree (starting with the areas that overlap what you're viewing) until you reach areas which are comparable to the size of a pixel. Stick a pixel in the middle of that area.
Added: doing your drawing with OpenGL/JOGL will also help you draw faster, especially if you can predict panning and can build up the points to show in a display list or similar, so that you don't have to do any CPU work for the new frames.
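
A simpler relative of the quadtree idea for 1-D time series (and likely similar in spirit to how dsplot keeps outliers) is min/max decimation: for each pixel column, draw a vertical segment from the minimum to the maximum of the samples that land in that column, so even one-sample spikes survive. A sketch with hypothetical names:

/**
 * Sketch: min/max decimation. For each pixel column of the plot, find the
 * min and max of the samples that fall in that column; drawing a vertical
 * segment per column preserves spikes that every-nth-point subsampling
 * would miss. (Trailing remainder samples are ignored for brevity.)
 */
static double[][] minMaxPerColumn(double[] samples, int columns) {
    double[][] out = new double[columns][2]; // out[c] = {min, max}
    int perColumn = Math.max(1, samples.length / columns);
    for (int c = 0; c < columns; c++) {
        int from = c * perColumn;
        if (from >= samples.length) break;   // fewer samples than columns
        int to = Math.min(samples.length, from + perColumn);
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (int i = from; i < to; i++) {
            min = Math.min(min, samples[i]);
            max = Math.max(max, samples[i]);
        }
        out[c][0] = min;
        out[c][1] = max;
    }
    return out;
}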
10 Hz data means that you only have to plot 10 frames per second. That should be easy, since many games achieve >100 fps with much more complex graphics.
If you plot 10 pixels per second for each possible data point, you can display a minute's worth of data in a 600-pixel-wide widget. If you save the index of the 600th-to-last sample, it should be easy to draw only the latest data.
If you don't have a new data-point every 10th of a second you have to come up with a way to insert an interpolated data-point. Three choices come to mind:
Repeat the last data-point.
Insert an "empty" data-point. This will cause gaps in the graph.
Don't update the graph until the next data-point arrives, then insert all the pixels you didn't draw at once, with linear interpolation between the data-points (see the sketch below).
To make the animation smooth use double-buffering. If your target language supports a canvas widget it probably supports double-buffering.
When zooming you have the same three choices as above, since the zoomed data-points are not continuous even if the original data-points were.
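
A sketch of that third option's gap fill (hypothetical names): given the last drawn value and a newly arrived one that lands gapPixels columns later, emit the intermediate pixel values by linear interpolation:

/** Sketch: fill the gap between two samples with linearly interpolated pixels. */
static double[] fillGap(double lastValue, double newValue, int gapPixels) {
    double[] filled = new double[gapPixels];
    for (int i = 1; i <= gapPixels; i++) {
        double t = (double) i / gapPixels; // 0 < t <= 1
        filled[i - 1] = lastValue + t * (newValue - lastValue);
    }
    return filled;
}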
This might help for implementing it in Java.
