A View Class for Cropping Images - iOS

When you work with still images, you use a UIImage to display them. UIImage is the glue between the UIImageView and the underlying bitmap. The UIImage renders the bitmap to the current graphics context and configures attributes such as transformation, orientation, and scaling. However, none of these operations changes the underlying representation. They control how the bitmap is rendered to the graphics context, but to make changes to the bitmap itself, you need to work with the Core Graphics framework.
Working with UIImage’s bitmap can be disorienting because it’s natural to assume the UIImage maps one-to-one to it. It doesn’t. Often what you see on the screen is a transformation. For example, the UIImage may report a width of 4288 and a height of 2848 while the bitmap’s dimensions are rotated 90 degrees.
If you work with the bitmap without accounting for these properties, you may scratch your head wondering why, after a transformation, the resulting bitmap doesn’t match expectations. Cropping is one such use case.
When a user drags a rectangle over an image, the rectangle’s orientation is with respect to the UIImage as rendered to the screen. So, if the imageOrientation is set to any value other than UIImageOrientationUp and you crop the bitmap without factoring in this property, the resulting image will appear distorted in different ways.
The other attributes to factor in are the image dimensions with respect to the view’s. They most often have different sizes. Since the user identifies the crop region through the view, the dimensions of the bitmap, the view, and the crop rectangle must be normalized so that what you see on screen matches what gets extracted from the bitmap.

CropDemo App

The remainder of this article describes a simple program called CropDemo. I will use it to demonstrate how to use MMSCropImageView and the helper classes to crop an image.
The app displays a large image at the top of the screen. The user drags a finger over the image to identify the crop region. As the finger drags across the image, a white, transparent rectangle displays. To reposition the rectangle, press a finger on it and move it around the image. Tap outside the rectangle to clear it. To crop the image, press the crop button at the bottom of the screen, and the area covered by the rectangle displays below the original one.
Figure 1 - Application Window
The application has a class and a category for you to use unchanged in your own application: MMSCropImageView and UIImage+cropping.
MMSCropImageView is a subclass of UIImageView. It provides the features for dragging a rectangle over the view, moving it, clearing it, and returning a UIImage extracted from the original one identified by the crop region.
The category named UIImage+cropping adds methods to the UIImage class to crop the bitmap. It can be used independently of the MMSCropImageView class.

Cropping a UIImage

The UIImage+cropping category implements a public method to crop the bitmap and return it in a UIImage.

-(UIImage*)cropRectangle:(CGRect)cropRect inFrame:(CGSize)frameSize 
The parameter cropRect is the rectangular region to cut from the image. Its dimensions are relative to the second parameter, frameSize.
The parameter frameSize holds the dimensions of the view where the image is rendered.
Since all the user’s inputs are relative to the view, it’s necessary to normalize the dimensions between the view and the image. The approach taken is to resize the bitmap to the view’s dimensions.
The other variable to factor in is the image’s orientation. Orientation must be determined to position the crop rectangle over the bitmap and to scale the dimensions in relation to the view, as the rendered image may be oriented differently from the bitmap.

Steps for Scaling a Bitmap

These are the steps for scaling the underlying bitmap of a UIImage.
One, check the imageOrientation and swap the scale size’s height and width if the image is oriented left or right.
if (self.imageOrientation == UIImageOrientationLeft || self.imageOrientation == UIImageOrientationRight) {
            
   scaleSize = CGSizeMake(round(scaleSize.height), round(scaleSize.width));

}
Two, create a bitmap context in the scale dimensions.
CGContextRef context = CGBitmapContextCreate(nil, scaleSize.width, scaleSize.height,
    CGImageGetBitsPerComponent(self.CGImage),
    CGImageGetBytesPerRow(self.CGImage) / CGImageGetWidth(self.CGImage) * scaleSize.width,
    CGImageGetColorSpace(self.CGImage), CGImageGetBitmapInfo(self.CGImage));
Three, draw the bitmap to the new context.
CGContextDrawImage(context, CGRectMake(0, 0, scaleSize.width, scaleSize.height), self.CGImage);
Four, get a CGImageRef from the bitmap context.
CGImageRef imgRef = CGBitmapContextCreateImage(context);
Five, instantiate a UIImage from the CGImageRef returned in step four.
UIImage* returnImg = [UIImage imageWithCGImage:imgRef];
The variable returnImg has height and width equal to the scale dimensions. Download the attached code to view the complete implementation of scaleBitmapToSize: in the file UIImage+cropping.m.

Transposing the Crop Rectangle

The crop rectangle’s origin and size are with respect to the view. Consequently, to properly crop the bitmap, the rectangle must be translated to position it correctly over the bitmap based on the orientation property. The method translateCropRect:inBounds: translates its origin within the enclosing boundary (the scaled bitmap dimensions) and, depending on the orientation, swaps the height and width.
It’s best to show this with pictures. The following images depict the origin and size translations that must occur when imageOrientation is left, right, and down. The blue square represents the rectangle’s origin, and the red rectangle represents where the original one falls after translation.
Figure 2 - Crop rectangle translation for UIImageOrientationLeft
 
Figure 3 - Crop rectangle for UIImageOrientationRight
 
Figure 4 - Crop rectangle for UIImageOrientationDown
 
The orientation for UIImageOrientationUp does not require a translation because the UIImage and bitmap are identically oriented.
The following code shows the algorithm to transform the origin and size when the orientation is UIImageOrientationLeft.
case UIImageOrientationLeft:
    transformedRect.origin.x = boundSize.height-(cropRect.size.height+cropRect.origin.y);
    transformedRect.origin.y = cropRect.origin.x;
    transformedRect.size = CGSizeMake(cropRect.size.height, cropRect.size.width);
    break;
See the attached code in UIImage+cropping.m for the calculations for all possible orientations.
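To make the rotation pattern concrete, here is a small C sketch of the translation for all four orientations, using stand-in structs rather than Core Graphics types. The Left case copies the snippet above; the Right and Down cases are my reconstruction of the same rotation pattern, so treat UIImage+cropping.m as the authoritative source for the exact formulas.

```c
/* Minimal stand-ins for CGRect/CGSize so the sketch is self-contained. */
typedef struct { double x, y; } Point;
typedef struct { double width, height; } Size;
typedef struct { Point origin; Size size; } Rect;

typedef enum { Up, Down, Left, Right } Orientation;

/* Sketch of translateCropRect:inBounds:. Maps a view-space crop rectangle
   onto the bitmap, rotating its origin within the enclosing boundary and
   swapping width/height for the 90-degree orientations. */
Rect translate_crop_rect(Rect crop, Size bound, Orientation o) {
    Rect t = crop;
    switch (o) {
    case Left:
        t.origin.x = bound.height - (crop.size.height + crop.origin.y);
        t.origin.y = crop.origin.x;
        t.size.width  = crop.size.height;
        t.size.height = crop.size.width;
        break;
    case Right:
        t.origin.x = crop.origin.y;
        t.origin.y = bound.width - (crop.size.width + crop.origin.x);
        t.size.width  = crop.size.height;
        t.size.height = crop.size.width;
        break;
    case Down:
        t.origin.x = bound.width - (crop.size.width + crop.origin.x);
        t.origin.y = bound.height - (crop.size.height + crop.origin.y);
        break;
    case Up: /* bitmap and rendered image agree; no translation needed */
        break;
    }
    return t;
}
```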

Cropping Steps

Now that the helper methods are explained, cropping a UIImage becomes quite simple. The first step is to scale the bitmap to the frame size. Since the crop rectangle’s coordinate space exists within the view, the bitmap’s space must match before applying it. An alternative would be to scale the crop rectangle to the bitmap’s coordinate space. I chose the former since it matches the user’s perspective.
UIImage* img = [self scaleBitmapToSize:frameSize];
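For comparison, the alternative approach mentioned above, mapping the view-space crop rectangle into the bitmap’s coordinate space, would amount to multiplying each component by the bitmap-to-view ratio on its axis. A minimal C sketch with stand-in types (not code from the attached project):

```c
typedef struct { double x, y, width, height; } RectD;

/* Scale a view-space crop rectangle into bitmap-space coordinates by
   multiplying each component by the bitmap/view ratio on its axis. */
RectD scale_rect_to_bitmap(RectD crop, double viewW, double viewH,
                           double bmpW, double bmpH) {
    double sx = bmpW / viewW;
    double sy = bmpH / viewH;
    RectD r = { crop.x * sx, crop.y * sy, crop.width * sx, crop.height * sy };
    return r;
}
```

One trade-off worth noting: scaling the bitmap down to the view size discards pixels before the cut, while mapping the rectangle keeps the crop at the source’s full resolution.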
Next, extract the crop region from the bitmap. It calls the Core Graphics function CGImageCreateWithImageInRect and passes a translated crop rectangle:
CGImageRef cropRef = CGImageCreateWithImageInRect(img.CGImage,
    [self translateCropRect:cropRect inBounds:frameSize]);
Finally, it creates a UIImage from the cropped bitmap with the class factory and passes the bitmap along with a scale factor of 1.0 and the source’s orientation. The orientation is key because the returned image displays like its source. If you always use a constant like UIImageOrientationUp, the image will display rotated, mirrored, or a combination of the two depending on the original value.
UIImage* croppedImg = [UIImage imageWithCGImage:cropRef scale:1.0 orientation:self.imageOrientation];
The complete implementation of cropRectangle:inFrame: follows:
/* cropRectangle:inFrame: returns a new UIImage cut from the cropRect region of the
underlying image. It first scales the underlying bitmap to the frame size before
cutting the crop area from it. The returned image has the dimensions of cropRect
and carries the same imageOrientation as the source UIImage.
 */
-(UIImage*)cropRectangle:(CGRect)cropRect inFrame:(CGSize)frameSize {
    
    frameSize = CGSizeMake(round(frameSize.width), round(frameSize.height));
    
    /* resize the image to match the zoomed content size
     */
    UIImage* img = [self scaleBitmapToSize:frameSize];
    
    /* crop the resized image to the crop rectangle.
     */
    CGImageRef cropRef = CGImageCreateWithImageInRect
    (img.CGImage, [self translateCropRect:cropRect inBounds:frameSize]);
    
    UIImage* croppedImg = [UIImage imageWithCGImage:cropRef scale:1.0 orientation:self.imageOrientation];
    
    return croppedImg;    
}

Drawing the Crop Rectangle

To identify the crop rectangle, the user drags out a rectangular region over the image. Once drawn, the rectangle can be moved to adjust its origin. The class MMSCropImageView supports the features of drawing and positioning the rectangle and returning the cropped image. It delivers these features in a subclass of UIImageView.
To draw and clear the crop rectangle, a UIPanGestureRecognizer and a UITapGestureRecognizer are added to the view. Because the gestures must be recognized simultaneously, simultaneous recognition must be enabled by implementing the UIGestureRecognizerDelegate method gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:. Return YES for one of the pair combinations, but not both.
-(BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer 
shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
   
    /* Enable to recognize the pan and tap gestures simultaneous for both the imageView and cropView
     */
    if ([gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]] 
    && [otherGestureRecognizer isKindOfClass:[UITapGestureRecognizer class]]) {
        
        return YES;        
    }
    
    return NO;    
}
The tap gesture target hides the crop view:
/* hideCropRectangle: the crop view becomes hidden when the user taps outside the crop view frame.
 */
- (IBAction)hideCropRectangle:(UITapGestureRecognizer *)gesture {    
    
    if (!cropView.hidden) {
        
        cropView.hidden = true;
        
        cropView.frame = CGRectMake(-1.0, -1.0, 0.0, 0.0);        
    }    
}
To more accurately identify the gesture’s first touch point, it uses the class DragCropRectRecognizer, a subclass of UIPanGestureRecognizer that serves to pinpoint the crop rectangle’s origin. It overrides the method touchesBegan:withEvent: to save the gesture’s first touch point.
This approach was chosen for identifying the origin over setting it when processing the UIGestureRecognizerStateBegan state because I found the point returned from locationInView: was shifted from where the finger first touched.
Here’s the code for identifying the origin:
/* touchesBegan:withEvent: override the UIPanGestureRecognizer to identify the point 
that began the touch gesture.  When the action method is called and the gesture state 
is UIGestureRecognizerStateBegan, the point returned from locationInView is not the point 
that began the gesture. This routine sets the origin of the pan gesture to the first touch point.
 */

-(void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    
    NSEnumerator* touchEnumerator = [touches objectEnumerator];
    
    UITouch* touch;
    
    while ((touch = [touchEnumerator nextObject])) {
        if (touch.phase == UITouchPhaseBegan) {
            self.origin = [touch locationInView:self.view];
            break;
        }
    }
    
    [super touchesBegan:touches withEvent:event];
    
}
As the user continues to move their finger across the image, the pan gesture repeatedly calls its target drawRectangle:, which draws the crop rectangle based on the new position of the touch. The crop rectangle is a UIView and a subview of the UIImageView with a white, transparent background and a solid white border. The most important factor in calculating the crop rectangle’s origin and size is to base them on the point that began the pan gesture. This point is referred to as the drag origin.
The first step in calculating the crop rectangle is to determine which quadrant the new point falls in relative to the drag origin: upper left (Quadrant I), upper right (Quadrant II), lower right (Quadrant III), or lower left (Quadrant IV). Once determined, calculate the new origin and the new size.
Here’s the code for calculating the crop rectangle’s dimensions when the current point falls in Quadrant III. See the method drawRectangle: in the source file MMSCropImageView.m in the attached source files for all the calculations.
cropRect = cropView.frame;

CGPoint currentPoint = [gesture locationInView:self];

if (currentPoint.x >= dragOrigin.x && currentPoint.y >= dragOrigin.y) {

    /* finger is dragged down and right with respect to the
    drag origin (Quadrant III); cropView's origin is the original touch point.
     */

    cropRect.origin = dragOrigin;

    cropRect.size = CGSizeMake(currentPoint.x - cropRect.origin.x, currentPoint.y - cropRect.origin.y);
}
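Incidentally, the four quadrant cases collapse into a single formula: the crop rectangle is the rectangle spanned by the drag origin and the current point, with the origin at the component-wise minimum and the size at the absolute difference. A C sketch of that equivalence (stand-in types, not the project’s code):

```c
typedef struct { double x, y; } Pt;
typedef struct { double x, y, width, height; } RectB;

/* The rectangle spanned by two points: origin is the component-wise
   minimum, size is the absolute difference. This single formula is
   equivalent to handling the four quadrant cases separately. */
RectB rect_from_points(Pt dragOrigin, Pt current) {
    RectB r;
    r.x = dragOrigin.x < current.x ? dragOrigin.x : current.x;
    r.y = dragOrigin.y < current.y ? dragOrigin.y : current.y;
    r.width  = dragOrigin.x < current.x ? current.x - dragOrigin.x
                                        : dragOrigin.x - current.x;
    r.height = dragOrigin.y < current.y ? current.y - dragOrigin.y
                                        : dragOrigin.y - current.y;
    return r;
}
```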

Moving the Crop Rectangle

Once the crop rectangle is drawn out, the MMSCropImageView class gives the user the ability to reposition it over the image. It adds a UIPanGestureRecognizer to the crop rectangle’s UIView to respond to move gestures. When the gesture begins, two points are recorded: the touch point referred to as touch origin and the crop rectangle’s origin called drag origin. They remain fixed through the drag operation.
if (gesture.state == UIGestureRecognizerStateBegan) {

    /*  save the crop view frame's origin to compute the changing position
    as the finger glides around the screen.  Also, save the first touch point
    to compute the amount to change the frame's origin.
     */

    touchOrigin = [gesture locationInView:self];

    dragOrigin = cropView.frame.origin;
}
As the user’s finger moves across the image, it computes the change in x and y from the touch origin. To reposition the crop rectangle, it updates the frame origin by adding the change in x and y to the corresponding variables in the drag origin. All computations are relative to those starting points.
CGFloat dx, dy;

CGPoint currentPt = [gesture locationInView:self];

dx = currentPt.x - touchOrigin.x;

dy = currentPt.y - touchOrigin.y;

cropView.frame = CGRectMake(dragOrigin.x + dx, dragOrigin.y + dy, cropView.frame.size.width, cropView.frame.size.height);

Special Handling for Tap Gesture

A gesture works its way up the view hierarchy in search of a handler until it reaches the parent. If the subview does not support the gesture and the parent does, the parent’s handler executes. This could have adverse consequences if the operation is not valid for the subview.
In this example, a tap on the image outside of the crop rectangle removes it. If the crop view does not handle the tap gesture and the user taps inside the rectangle, the parent’s handler is called and removes it. To prevent the parent handler from executing, a tap gesture with no target is added to the crop view, so the tap is swallowed.
// Create the swallow gesture and attach it to the cropView.
swallowGesture = [[UITapGestureRecognizer alloc] init];

[cropView addGestureRecognizer:swallowGesture];

Points of Interest

During the research and development of this class, I spent a considerable amount of time resolving differences in sharpness between the cropped image and the original when extracted from a magnified rendering. Though the cropped image had the same pixel dimensions as the magnified crop region, it appeared pixelated when displayed on the screen.
I can’t tell you how many hours I spent trying to understand and resolve the small difference in sharpness. When I finally convinced myself that I was extracting the image from the correct origin, height, and width, and that the UIImageView dimensions were identical, I began to look elsewhere. That’s when I said to myself: let me see how it appears on an actual device versus the simulator.
Voila! The cropped image had sharpness identical to the magnified original, with no algorithm changes. I concluded the differences were an artifact of the simulator.
Since I had an explanation and devoted too much time to it, I did not attempt to understand what it was about the simulator that showed this behavior. If any reader has some insights, please share them in the comments or contact me via email. As you may tell, it continues to gnaw at me, but not enough to research further.

Using the Code

The code for this example is attached to the article in a zip file. You can also find it on GitHub at https://github.com/miller-ms/MMSCropImageView.
To use the code in your own app, select the custom class MMSCropImageView for the image view widget in your storyboard. If you do not use storyboards, create one in the view controller where you plan to show the image.
MMSCropImageView *cv = [[MMSCropImageView alloc] initWithFrame:CGRectMake(10, 10, 200, 100)];
Import the header in the files where you will be interacting with the object.
#import "MMSCropImageView.h"
Add an event handler to the view controller of the view where the object is displayed. The event handler is likely connected to a button where the user initiates the crop action.
- (IBAction)crop:(UIButton *)sender {

    UIImage* croppedImage = [self.imageView crop];

    self.croppedView.image = croppedImage;
}
This example simply displays the returned image in another UIImageView.
