eecs441 §3 & 4
Chatter with Images Swift
DUE Mon, 02/02, 2 pm
Welcome to the front end for Lab 1! In this lab, we will be using iOS's UIImagePickerController to add and manipulate images and videos in Chatter. We will also implement the Observer pattern using Swift's property observer and iOS's NotificationCenter to automate the chatts timeline update when a new list is downloaded. We assume that you have completed the back-end server setup, which is described in a separate spec.
The iPhone simulator does not simulate a camera, so you'll need a physical device to complete this lab.
Gif demo
Post an image and a video:
Right click on the gif and open it in a new tab to get a full-size view. To view the gif again, please hit refresh in the browser (in the tab where the gif is opened).
Uploading images and videos
Images and videos can be uploaded to the server either by picking one from the device's photo album or by taking a picture/video with the device's camera. Images will be downloaded and displayed with the given chatts. On the posting screen, we will want a button to access the album and one for taking photos and videos, and a preview of the image to be posted. On the Main screen showing the chatt timeline, we will want posted images to show up alongside their respective chatts and a button to play back any posted video. Let's get started.
Preparing your GitHub repo
- On your laptop, navigate to YOUR_LABSFOLDER/
- Create a zip of your lab0 folder
- Rename your lab0 folder lab1. If there's a DerivedData folder in your lab1/swiftChatter/ folder, delete it.
- Push your local YOUR_LABSFOLDER/ repo to GitHub and make sure there are no git issues
Third-party SDKs and Cocoapods
We will be using two third-party SDKs in this lab: SDWebImage, to help with image downloading, and Alamofire, to help with multipart/form-data upload. Both of these SDKs are available as CocoaPods. CocoaPods is an open-source package manager.
Installing Cocoapods
Each of the following two steps can take a long time. Be patient.
First install Homebrew:

laptop$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Next, install CocoaPods using brew:

laptop$ brew install cocoapods

With that you should be ready to use CocoaPods!
If you encounter any issues with the installation of Homebrew or CocoaPods, check the Homebrew Common Issues Guide.
Installing third-party SDKs
With CocoaPods installed, we now create a Podfile for the third-party SDKs we'll be using in this lab. Go to your project folder and create a Podfile by running pod init.

laptop$ cd  # to <YOUR_LABSFOLDER/lab1/swiftChatter> project folder
laptop$ ls -CF
swiftChatter/    swiftChatter.xcodeproj/
laptop$ pod init
This should create a Podfile in your swiftChatter project folder. This file will define your project's dependencies. Open it and replace the contents with the following:
# Uncomment the next line to define a global platform for your project
platform :ios, '15.4'

target 'swiftChatter' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for swiftChatter
  pod 'SDWebImage'
  pod 'Alamofire'
end
Save the file and run:

laptop$ pod install
CocoaPods now runs natively on Apple Silicon (M1), so we no longer need to use arch -x86_64 as was previously necessary.
To update the Podfile, for example after you have upgraded iOS, you can use the command pod update.
In Xcode's File > Open, choose the swiftChatter.xcworkspace file in your YOUR_LABSFOLDER/lab1/swiftChatter/ project folder.
.xcworkspace
When you add a pod to your project, you turn your humble "project" into a grand "workspace". The pod install command creates an .xcworkspace file for your application, which henceforth Xcode will use to organize your project, in place of your original project file. If you usually clicked on your .xcodeproj file to open your project, you'll be clicking on the .xcworkspace file instead.
Requesting permissions
Your app must first request the user's permission to access the device's camera, photo album, and mic. Navigate to the file swiftChatter/Info.plist. Right click on the empty space and select Add Row. Select App Category in the drop-down menu. Then enter Privacy - Microphone Usage Description (overwriting App Category:) and in the Value field to the right enter the reason you want to access the mic, for example, "to record audio chatt". What you enter into the value field will be displayed to the user when seeking their permission (screenshot). Repeat the process and give justification to request permission for two more privacy-protected features: Privacy - Photo Library Usage Description and Privacy - Camera Usage Description.
When you try to access the photo library, camera, or mic, iOS will automatically check access permission and, if it is your app's first attempt to access these, iOS will also automatically prompt the user for permission.
If you accidentally chose "Allow Once" or "Don't Allow" when your app requested permission, go to Settings > Privacy > Camera, select your app and tap "Ask Next Time" to reset it. Similarly for Microphone and Photos.
Manual permission
Permission checking and requesting can be done manually. However, if you let iOS do it and the user grants permission, the operation simply proceeds, whereas if you do it manually, the user would have to re-initiate the operation.
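For reference, here is a minimal sketch of what a manual check for the camera might look like, using AVFoundation's authorization API; the helper function below is our own invention, not part of the lab code:

import AVFoundation

// A sketch of manual camera-permission checking; in this lab we instead
// let iOS prompt automatically on first access.
func ensureCameraAccess(onGranted: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        onGranted()  // already granted: proceed immediately
    case .notDetermined:
        // iOS shows the system prompt; proceed only if the user grants access
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { DispatchQueue.main.async { onGranted() } }
        }
    default:
        print("Camera access denied or restricted")  // user must change Settings
    }
}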
Main.storyboard
Video: as with lab0, we have recorded a video showing you how to work with Xcode, especially the Storyboard editor. You may want to follow along with the video as you complete setting up the UI. The video only shows the UI part; you'll still need to return here to complete the code for the lab. Depending on your version of Xcode, the screens in the video may not look exactly the same as what you see in your Xcode.
UI for posting
Under the Post Scene we want to add an image preview (photo only, no video) and two buttons, one for getting a photo or video from an album, and the other for taking a picture or video with the camera.
Let us first add the album button (screenshot):
- In your project's Main.storyboard, click on the + sign at the upper right of the main window, or select on the main menu View > Show Library (⇧⌘L).
- Add a Button to your Post Scene (drop it anywhere below the Message Text View, don't worry about exact placement for now).
- In the Identity Inspector, in the Document section, enter "Album Button" in the Label field.
- We want this button to display as an icon. Go to its Attribute Inspector, click on the field next to Image (eighth item down), enter "photo", and click on the "photo" icon in the drop-down menu (don't hit return).
- In the third field from the top, confirm that the Title is Plain and, in the fourth field, right below the Title, delete the text there until you see the greyed-out placeholder "Default Title" (otherwise the title text will show up in white font across your icon).
- In the Default Symbol Configuration section, the first item says Configuration. Set it to Point Size, then set the point size (next item down) to 30. When setting the size of a button, make sure it is big enough for users to tap easily.
We now work on the constraints for Album Button. Recall that Auto Layout requires four pieces of information about each UI element (sketched in code after this list):

- the \(x\)- and \(y\)-coordinates of one of the element's corners,
- the element's width,
- and its height.
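For intuition only, here is how those same four pieces of information could be pinned down programmatically with layout anchors; albumButton and messageTextView are hypothetical references to the views we are about to constrain in Interface Builder, and we will not actually write this code in this lab:

// inside a view controller; values match the constraints we set below
albumButton.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    // y-coordinate: top edge 16 points below the message text view
    albumButton.topAnchor.constraint(equalTo: messageTextView.bottomAnchor, constant: 16),
    // x-coordinate: trailing edge 12 points in from the safe area
    albumButton.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor, constant: -12),
    // width and height
    albumButton.widthAnchor.constraint(equalToConstant: 40),
    albumButton.heightAnchor.constraint(equalToConstant: 40),
])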
Let's set these four pieces of information about the Album Button:

- With the Album Button selected, click on the Add New Constraints icon at the bottom right (third from right) of the Interface Builder pane.
- Set the top constraint to 16.
- Set the trailing (right) constraint to 12.
- Set the width and height both to 40.
- Click the button Add 4 Constraints.
- Double check in the Size Inspector that the trailing edge is set against the trailing Safe Area and the top edge is set against the bottom of the Message Text View (screenshot).
Next we add the camera button: repeat the process above except we'll label the button Camera Button and we'll give it the icon camera. Give it the same four constraints as we did the Album Button. This time though you should confirm that Interface Builder has correctly set the trailing edge of Camera Button against the leading (left) edge of Album Button (screenshot).
To show a preview of the image to be posted, from the Object Library add an Image View to the left of your Camera Button, below the Message Text View. With the Image View selected, click the Add New Constraints icon and add these four constraints:

- Set the top constraint to 12.
- Set the leading (left) constraint to 16.
- Set the width and height both to 128.
- Click Add 4 Constraints.
- In the Size Inspector pane, confirm that the top edge is constrained against the bottom of Message Text View and the leading edge is constrained against the leading Safe Area.
Your Post Scene could now look something like this screenshot.
UI for viewing
Now, on your Chatter Scene, click on ChattTableCell to select it in the storyboard layout pane, then drag down the bottom of ChattTableCell to increase its height.
From the Object Library, add an Image View to ChattTableCell beneath Message Label. Set the following five constraints (screenshot):
- Top edge to bottom of Message Label: 8
- Leading edge to SuperView leading: 0 (check Constrain to margins)
- Bottom edge to SuperView bottom: 0. It is important to constrain the SuperView bottom against the bottom of Image View, to prevent ChattTableCell assuming its default height.
- Both width and height set to 128. Unless you force set the width and height of Image View (set priority of both to 1000), downloaded images may expand to fill your cell.
- In the Size Inspector pane, confirm that the top edge is constrained against the bottom of Message Label, the leading edge is constrained against the leading SuperView, and the bottom edge is constrained against the bottom SuperView.
Next add the video button: repeat the process used to add the Album Button above, except we'll label the button Video Button and we'll give it the icon play.rectangle.fill. Give it the following four constraints:

- Set the top constraint to 12.
- Set the trailing (right) constraint to 0 (check Constrain to margins).
- Set the width and height both to 40.
- Click Add 4 Constraints.
- In the Size Inspector pane, confirm that the top edge is constrained against the bottom of Timestamp Label and the trailing edge is constrained against the trailing SuperView Margin.
- You may need to reduce the width of your Message Label to 250 and set its trailing edge to be ≥ 0 against the leading edge of Video Button.
Your Chatter Scene could now look something like this screenshot.
Connect UI with code
Now we want to create actions for all of these UI elements. Get ready for a lot of ^dragging (Ctl+drag)!
With your Image View selected, pull up the Assistant Editor (screenshot).
If the Assistant Editor is not showing your ChattTableCell.swift file, click on one of the file switchers at the upper right corner of the Assistant Editor (its exact appearance depends on your Xcode version) until your ChattTableCell.swift shows up. Or you can hold down the option key and click on ChattTableCell.swift in the Navigation (leftmost) panel. If all else fails, sometimes restarting Xcode helps.
Make sure ChattTableCell.swift is loaded in your Assistant Editor. Now ^drag your Image View from your Chatter Scene to your ChattTableCell class. When the Connection box comes up, choose Outlet from the drop-down menu and name it chattImageView.
Next ^drag the Video Button into the ChattTableCell class. Create an @IBOutlet variable and name it videoButton. Now ^drag Video Button into the ChattTableCell class one more time, but this time create an Action connection and name it videoTapped. We will use the @IBOutlet variable to control the appearance and attributes of the button, while the @IBAction function specifies the action taken when the button is tapped.
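After these two ^drags, your ChattTableCell should contain stubs roughly like the following (Xcode generates them for you; the exact formatting may differ):

@IBOutlet weak var videoButton: UIButton!

@IBAction func videoTapped(_ sender: UIButton) {
}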
Move on to your Post Scene and select your Image View. The Assistant Editor should automatically load the PostVC.swift file as you click on Image View. Go ahead and ^drag Image View from the Post Scene to the PostVC class to create an @IBOutlet and name it postImage.
Finally, ^drag the Album Button from Post Scene to the PostVC class and create an @IBAction function called pickMedia. We will fill in this function later. Similarly, create an @IBAction function called accessCamera by ^dragging the Camera Button from Post Scene to the PostVC class.
We are now done with the Storyboard work and can get on with the coding.
UIImagePickerController
We will be using iOS's UIImagePickerController to access the photo album and camera. UIImagePickerController is an iOS class that manages the system interfaces for taking pictures, recording videos, and retrieving items from the user's media library. UIImagePickerController also manages user interactions such as image repositioning, zooming, cropping, and video head and tail trimming. To use it we declare PostVC to conform to two delegate protocols:
final class PostVC: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    private var videoUrl: URL?
Note that we also added the videoUrl property to hold the video URL.
We now add the pickMedia(_:) and accessCamera(_:) methods to the PostVC class. The first launches UIImagePickerController specifying the photo library as the media source. The second launches it with the camera as the media source. In both cases, we limit the duration of video to five seconds.
@IBAction func pickMedia(_ sender: Any) {
    presentPicker(.photoLibrary)
}

@IBAction func accessCamera(_ sender: Any) {
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        presentPicker(.camera)
    } else {
        print("Camera not available. iPhone simulators don't simulate the camera.")
    }
}

private func presentPicker(_ sourceType: UIImagePickerController.SourceType) {
    let imagePickerController = UIImagePickerController()
    imagePickerController.sourceType = sourceType
    imagePickerController.delegate = self
    imagePickerController.allowsEditing = true
    imagePickerController.mediaTypes = ["public.image", "public.movie"]
    imagePickerController.videoMaximumDuration = TimeInterval(5) // secs
    imagePickerController.videoQuality = .typeHigh
    present(imagePickerController, animated: true, completion: nil)
}
To let the user pick either an image or video file and to take either a photo or record a video, we set up the imagePickerController to handle both "public.image" and "public.movie" media types. To enable image zooming and cropping and video head and tail trimming prior to posting, we set allowsEditing = true. You can change the videoMaximumDuration and videoQuality to different values. However, be mindful that both Django and Nginx have an upper limit on client upload size. If you change the max duration or quality of video, be sure to adjust the upload threshold of both Nginx and Django accordingly.
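For reference, the knobs involved would be Nginx's client_max_body_size directive and, on the Django side, settings such as DATA_UPLOAD_MAX_MEMORY_SIZE; the 20 MB values below are illustrative assumptions, not this lab's required configuration:

# nginx.conf, inside the http or server block:
client_max_body_size 20M;

# Django settings.py (request-body threshold; file uploads have related settings):
DATA_UPLOAD_MAX_MEMORY_SIZE = 20 * 1024 * 1024  # bytes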
UIImagePickerController will return the selected or recorded image/video through its delegates. If an image is returned, we want the delegate to put the image in the postImage: UIImageView we've created. Depending on whether the image is edited, the delegate needs to retrieve it either as originalImage or editedImage. If the retrieval is successful, we resize the image before storing it in postImage.image.
func imagePickerController(_ picker: UIImagePickerController,
        didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    if let mediaType = info[UIImagePickerController.InfoKey.mediaType] as? String {
        if mediaType == "public.image" {
            postImage.image = (info[UIImagePickerController.InfoKey.editedImage] as? UIImage ??
                info[UIImagePickerController.InfoKey.originalImage] as? UIImage)?
                .resizeImage(targetSize: CGSize(width: 150, height: 181))
If UIImagePickerController returned a video, we simply store the URL returned in videoUrl. Continue to complete the above method:
        } else if mediaType == "public.movie" {
            videoUrl = info[UIImagePickerController.InfoKey.mediaURL] as? URL
            // can convert to absoluteString ONLY after picker.dismiss
        }
    }
    picker.dismiss(animated: true, completion: nil)
}
We also need to provide a delegate method to handle the case when UIImagePickerController cannot return any video/image:
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
    picker.dismiss(animated: true, completion: nil)
}
We now implement the function .resizeImage(targetSize:) as an extension to the UIImage class. Create a new Swift file called Extensions.swift. We'll gather this and future extensions centrally in this file. For now put the following code in it:
import UIKit

extension UIImage {
    func resizeImage(targetSize: CGSize) -> UIImage? {
        // Figure out orientation, and use it to form the rectangle
        let ratio = (targetSize.width > targetSize.height) ?
            targetSize.height / size.height :
            targetSize.width / size.width
        let newSize = CGSize(width: size.width * ratio, height: size.height * ratio)
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)

        // Actually do the resizing to the calculated rectangle
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return newImage
    }
}
We can now test our app! Make sure that when you tap the Album Button you are able to choose an image from the Photo Library and that it previews. We can only test the camera on a physical device, but that should be working now also! If your image ends up occluding some of the labels and icons on your Post Scene, you need to work on your layout constraints so that the elements do not end up overlapping. Most likely, you'd need to set the priority of the height and width constraints of your Image View to 1000.
Uploading
We first need to append these two new members to the end of the Chatt class in Chatt.swift to hold the image and video URLs:
@ChattPropWrapper var imageUrl: String?
@ChattPropWrapper var videoUrl: String?
Both imageUrl and videoUrl use the ChattPropWrapper property wrapper. When there's no valid URL associated with imageUrl or videoUrl, we want the value of these properties to be nil Strings. Unfortunately an empty value in a JSON object can sometimes be encoded as "null", i.e., a string with the characters n-u-l-l in it. The ChattPropWrapper converts "null" and the empty string "" into a nil String. Add the following to your Chatt.swift file:
@propertyWrapper
struct ChattPropWrapper {
    private var _value: String?
    var wrappedValue: String? {
        get { _value }
        set {
            guard let newValue = newValue else {
                _value = nil
                return
            }
            _value = (newValue == "null" || newValue.isEmpty) ? nil : newValue
        }
    }

    init(wrappedValue: String?) {
        self.wrappedValue = wrappedValue
    }
}
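A quick behavior sketch (the literal values are made up, and we assume the Chatt initializer used later in this spec): once the wrapper is in place, both the literal "null" string and the empty string read back as nil:

let chatt = Chatt(username: "alice", message: "hi",
                  imageUrl: "null",  // JSON-encoded "empty" value
                  videoUrl: "")      // empty string
assert(chatt.imageUrl == nil && chatt.videoUrl == nil)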
Back in PostVC, we next edit our submitChatt(_:) method to post the loaded image and/or video to our server along with our chatt. Replace the content of your submitChatt(_:) method with the following:
let chatt = Chatt(username: self.usernameLabel.text,
                  message: self.messageTextView.text,
                  imageUrl: nil,
                  videoUrl: videoUrl?.absoluteString)
ChattStore.shared.postChatt(chatt, image: postImage.image)
dismiss(animated: true, completion: nil)
We will use the Alamofire SDK to upload the image/video using the multipart/form-data representation/encoding.
When a web page has a form for the user to fill out, the page usually has multiple fields (e.g., name, address, net worth, etc.), each comprising a separate part of the multi-part form. Data from these multiple parts of the form is encoded for sending over HTTP using the native multipart/form-data representation. One advantage of using this encoding instead of JSON is that binary data can be sent as is, not encoded into a string of printable characters. Since we don't have to encode the binary data into a character string, we can also stream directly from file to network without having to first load the whole file into memory, allowing us to send much larger files. These are the two reasons we use the multipart/form-data encoding instead of JSON in this lab.
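Schematically, a multipart/form-data request body looks something like the following; the boundary string and values here are made up, and Alamofire will construct the real body for us below:

POST /postimages/ HTTP/1.1
Content-Type: multipart/form-data; boundary=XyZ

--XyZ
Content-Disposition: form-data; name="username"

alice
--XyZ
Content-Disposition: form-data; name="image"; filename="chattImage"
Content-Type: image/jpeg

(raw JPEG bytes, sent as-is)
--XyZ--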
At the top of your ChattStore.swift, add:

import Alamofire
then replace your postChatt(_:) method with:
func postChatt(_ chatt: Chatt, image: UIImage?) {
    guard let apiUrl = URL(string: serverUrl + "postimages/") else {
        print("postChatt: Bad URL")
        return
    }
    AF.upload(multipartFormData: { mpFD in
        if let username = chatt.username?.data(using: .utf8) {
            mpFD.append(username, withName: "username")
        }
        if let message = chatt.message?.data(using: .utf8) {
            mpFD.append(message, withName: "message")
        }
        if let jpegImage = image?.jpegData(compressionQuality: 1.0) {
            mpFD.append(jpegImage, withName: "image", fileName: "chattImage", mimeType: "image/jpeg")
        }
        if let urlString = chatt.videoUrl, let videoUrl = URL(string: urlString) {
            mpFD.append(videoUrl, withName: "video", fileName: "chattVideo", mimeType: "video/mp4")
        }
    }, to: apiUrl, method: .post).response { response in
        switch response.result {
        case .success:
            self.getChatts()
            print("postChatt: chatt posted!")
        case .failure:
            print("postChatt: posting failed")
        }
    }
}
The code constructs the "form" to be uploaded as comprising a part named "username", with the field containing the username as in-memory data with UTF-8 encoding. Next it appends a part named "message" constructed similarly. Then comes a part named "image" with in-memory data that has been JPEG encoded (no compression in this case). The "fileName" is how the data is tagged; it can be any string. The "mimeType" documents the encoding of the data (though it doesn't seem to be used for anything). The last part is named "video"; the data is not in memory, but rather must be retrieved from the videoUrl. Upon completion of the upload, the response is processed in the provided closure. If the upload succeeded, we call getChatts() to retrieve the updated list of chatts before returning. At this point, Xcode will complain that we're missing some arguments in our call to getChatts(). You can safely ignore this warning. We will update getChatts() shortly.
Depending on your upload bandwidth, uploading video can take a long time. Wait for postChatt: chatt posted! to print out in your Xcode's View > Debug Area > Activate Console before trying to refresh your app's timeline to view the new chatt.
You will likely see a large number of warnings in the Xcode console. As long as your app doesn't crash, you can safely ignore these warnings for this lab.
With the updated PostVC, you can now take or select images and videos and send them to your Chatter back end! Since we haven't worked on image/video download, you can verify this by inspecting the content of your chatts table in the postgres database at the back end.
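For example, from a psql prompt on your server, a query like the following would show the stored URLs (the column names here are assumptions; use whatever your back-end spec created):

SELECT username, message, imageurl, videourl FROM chatts;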
Let's move on to downloading images from your server to see them in your timeline.
Viewing posted images and videos
We are now at the final step: getting the image and/or video from our server and showing them in the chatts!
Recall that MainVC presents retrieved chatts as a list. When the user taps on the video button of a chatt, MainVC must launch AVPlayer to play back the video. It does this by initializing a new instance of AVPlayer with the URL of the video to be played back. The question is, how does MainVC know which chatt, which cell of the table, the user tapped on, and therefore which video URL to initialize AVPlayer with?
Which cell was tapped?
Following this stackoverflow posting, we will use a closure to let a cell execute code in the context of the TableView.
First, in the ChattTableCell class, we create a variable to hold the closure:

var playVideo: (() -> Void)?  // a closure
When a cell is tapped, we simply run its closure. Search your ChattTableCell for the @IBAction func videoTapped(_ sender: UIButton) that we created earlier when preparing the storyboard. Add the following code to the function:

self.playVideo?()
In summary, your ChattTableCell class should now contain these statements:

var playVideo: (() -> Void)?  // a closure

@IBAction func videoTapped(_ sender: UIButton) {
    self.playVideo?()
}
Alternatives to closure
If you simply want to segue when a cell is tapped, you can use tableView(... didSelectRowAt indexPath: ...) and prepare(for segue:, sender:) as shown in this stackoverflow post. Unfortunately tableView doesn't have similar provisioning for providing context when a button in a custom cell is tapped.
Instead of a closure, you can register a delegate to obtain the tableView context. However, the use of a closure is the more elegant solution.
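For comparison, here is a sketch of the delegate approach; the protocol and method names below are our own invention, not part of the lab code:

protocol ChattTableCellDelegate: AnyObject {
    func videoTapped(in cell: ChattTableCell)
}

// In ChattTableCell, a weak delegate reference replaces the closure:
//     weak var delegate: ChattTableCellDelegate?
//     @IBAction func videoTapped(_ sender: UIButton) { delegate?.videoTapped(in: self) }

// MainVC conforms, sets cell.delegate = self in cellForRowAt, and recovers
// which chatt was tapped from the cell's position in the table:
extension MainVC: ChattTableCellDelegate {
    func videoTapped(in cell: ChattTableCell) {
        guard let indexPath = tableView.indexPath(for: cell) else { return }
        // look up the chatt at indexPath.row and present AVPlayer as below
    }
}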
Displaying images and playing back videos
Now we return to the MainVC.swift file and import AVKit and SDWebImage at the top of the file. These will allow us to use AVPlayer to play back video and to show downloaded images using progressive rendering, respectively.
Progressive JPEG
"Progressive rendering" here is different from progressive JPEG. Progressive JPEG will first download and show a total, albeit low-resolution, version of the paradigm. The resolution then improves over time as more and more data is downloaded. Progressive rendering is a kind of streaming download. You evidence parts of the paradigm as presently as you accept some partial information instead of waiting for the download to complete. All the same, the images used in his lab are so small, the effect is hardly noticable.
import AVKit
import SDWebImage
Next, under the tableView function with the following signature:
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
add the following code directly before the return cell line, to preview the retrieved image per chatt:
if let urlString = chatt.imageUrl, let imageUrl = URL(string: urlString) {
    cell.chattImageView.sd_setImage(with: imageUrl,
                                    placeholderImage: UIImage(systemName: "photo"),
                                    options: [.progressiveLoad])
    cell.chattImageView.isHidden = false
} else {
    cell.chattImageView.image = nil
    cell.chattImageView.isHidden = true
}
The .sd_setImage(with:placeholderImage:options:) is the progressive rendering extension to UIImageView from the SDWebImage SDK. Note that unless you force set the width and height of your UIImageView (set priority of both to 1000) in your Storyboard, downloaded images may expand to fill your cell.
When a UIView is hidden, it is not visible but the space it would otherwise occupy is simply left blank. It would be nice if the unoccupied space could be removed also; unfortunately, iOS doesn't support such a "gone" state. There are ways to approximate it, for instance by setting the height of the UIView to 0, but it doesn't work reliably, e.g., when fast scrolling a table view.
Now add the following code directly after the above, again before the return cell line:
if let urlString = chatt.videoUrl, let videoUrl = URL(string: urlString) {
    cell.videoButton.isHidden = false // remember: cells are recycled and reused
    cell.playVideo = {
        let avPlayerVC = AVPlayerViewController()
        avPlayerVC.player = AVPlayer(url: videoUrl)
        if let player = avPlayerVC.player {
            self.present(avPlayerVC, animated: true) {
                player.play()
            }
        }
    }
} else {
    cell.videoButton.isHidden = true
    cell.playVideo = nil
}
If a given chatt contains video, the videoButton will become visible, and when clicked, it will launch AVPlayerViewController with its AVPlayer initialized to the video's URL to play it back. When there is no video, we explicitly hide the videoButton and set its playVideo to nil. Recall that table view cells are recycled and reused.
getChatts()
To implement the Observer pattern, we first define a couple of properties to use with the Notification Center and add a property observer to our chatts array. At the same time, since we are using Alamofire to upload chatts in postChatt(_:image:), we could use Alamofire for the download also. Replace your chatts property declaration and the getChatts(_:) method in ChattStore with:
let propertyNotifier = NotificationCenter.default
let propertyName = NSNotification.Name("ChattStore")

var chatts = [Chatt]() {
    didSet {
        propertyNotifier.post(name: propertyName, object: nil)
    }
}

func getChatts() {
    guard let apiUrl = URL(string: serverUrl + "getimages/") else {
        print("getChatts: bad URL")
        return
    }
    AF.request(apiUrl, method: .get).responseJSON { response in
        guard let data = response.data, response.error == nil else {
            print("getChatts: NETWORKING ERROR")
            return
        }
        if let httpStatus = response.response, httpStatus.statusCode != 200 {
            print("getChatts: HTTP STATUS: \(httpStatus.statusCode)")
            return
        }
        guard let jsonObj = try? JSONSerialization.jsonObject(with: data) as? [String: Any] else {
            print("getChatts: failed JSON deserialization")
            return
        }
        let chattsReceived = jsonObj["chatts"] as? [[String?]] ?? []
        self.chatts = [Chatt]()
        for chattEntry in chattsReceived {
            if chattEntry.count == self.nFields {
                self.chatts.append(Chatt(username: chattEntry[0],
                                         message: chattEntry[1],
                                         timestamp: chattEntry[2],
                                         imageUrl: chattEntry[3],
                                         videoUrl: chattEntry[4]))
            } else {
                print("getChatts: Received unexpected number of fields: \(chattEntry.count) instead of \(self.nFields).")
            }
        }
    }
}
Observer
We now implement the Observer for the chatts array. Add the following method to your MainVC class:
@objc private func propertyObserver(_ event: NSNotification) {
    DispatchQueue.main.async {
        self.tableView.reloadData()
    }
}
Once we have the observer defined, we register it in the viewDidLoad() method of MainVC:
ChattStore.shared.propertyNotifier.addObserver(
    self,
    selector: #selector(propertyObserver(_:)),
    name: ChattStore.shared.propertyName,
    object: nil)
As of iOS 9.0, observers are automatically de-registered when no longer in scope.
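Should you nevertheless want to de-register explicitly (see removeObserver(_:name:object:) in the references), a sketch of what that might look like in MainVC:

deinit {
    ChattStore.shared.propertyNotifier.removeObserver(
        self, name: ChattStore.shared.propertyName, object: nil)
}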
Now that MainVC automatically updates the chatt timeline whenever the list changes, we don't need to manually update it in refreshTimeline(). We still keep the function in case the user wants to refresh the timeline, for instance, to catch up with chatts posted by other users. Replace your refreshTimeline(_:) method with:
private func refreshTimeline(_ sender: UIAction?) {
    ChattStore.shared.getChatts()
    // stop the refreshing animation upon completion:
    self.refreshControl?.endRefreshing()
}
Finally, back in MainVC.viewDidLoad(), replace the call to refreshTimeline(nil) with a call to ChattStore.shared.getChatts(), which should then automatically update your screen with downloaded chatts (if any) on every app launch.
Congratulations, you've successfully added the ability to access your device's photo album or camera, upload/download images and videos to/from your server, and display images and play back videos in your app's feed. We're all done!
Submission guidelines
Important: If you work in a team, remember to put your teammate's name and uniqname in the lab1 folder's README.md so that we'd know. Otherwise, we could mistakenly think that you were cheating and accidentally report you to the Honor Council, which would be a hassle to undo.
Review your information on the Lab Links sheet. If you've changed your teaming arrangement from the previous lab's, please update your entry. If you're using a different GitHub repo from the previous lab's, invite eecs441staff@umich.edu to your GitHub repo and update your entry.
We will only grade files committed to the main branch. If you use multiple branches, please merge them all to the main branch for submission.
Push your lab1 to its GitHub repo as set up at the beginning of this spec. Using GitHub Desktop to do this, you can follow the steps below:

- Open GitHub Desktop and click on Current Repository on the top left of the interface
- Click on your 441 GitHub repo
- Add a Summary to your changes and click Commit to main
- If you have a teammate and they have pushed changes to GitHub, you'll have to click Pull Origin and resolve any conflicts before . . .
- Finally click on Push Origin to push changes to GitHub
Go to the GitHub website to confirm that your project files for lab1 have been uploaded to your GitHub repo under the folder lab1.
Verify that your Git repo is set up correctly: clone your repo and build and run your submission to make sure that it works. On your clone, you may have to run:

laptop$ pod install

before you can rebuild your project. You will get ZERO points if your lab doesn't build, run, or open.
References
- Apple's UIImagePickerController
- stackoverflow article on its use
- Base64
- Resizing Images
- UIImage conversion with Base64
- Upload image to server using URLSessionUploadTask
- Crop Box Apple Documentation
- Crop Box Implementation Example
- AVPlayer Example
- AVKit
- AVFoundation
- Determine if access to the photo library is set or not - PHPhotoLibrary
- How to check if the user gave permission to use the camera?
- iOS view visibility gone
Observer pattern
- Get hands-on with the Cocoa MVC pattern
- Observers in Swift – Part 1
- Property Observers
- NotificationCenter
- removeObserver(_:name:object:)
Image download
- SDWebImage 5.9.0 Docs
- SDWebImage Progressive Image Downloading
- JPEG Formats - Progressive vs. Baseline
- Progressive JPEGs and green Martians
Multipart/form-data
- Upload Data using Multipart
- Understanding HTML Form Encoding: URL Encoded and Multipart Forms
- RESTful API Tutorial: How to Upload Files to a Server
- RFC7578: Returning Values from Forms: multipart/form-data
Alamofire
- How to parse JSON response from Alamofire API in Swift?
- Send POST parameters with MultipartFormData using Alamofire, in iOS Swift
- Alamofire Multipart with parameters: upload Image from UIImagePickerController Swift
- Alamofire 5 Tutorial for iOS
- Alamofire References
- Alamofire Documentation
- Alamofire MultipartFormData
- Alamofire Uploading MultipartFormData
Prepared for EECS 441 by Ollie Elmgren, Wendan Jiang, Benjamin Brengman, Tianyi Zhao, Alexander Wu, Yibo Pi, and Sugih Jamin | Last updated: April 7th, 2022
Source: https://eecs441.eecs.umich.edu/asns/lab1-swiftImages.html