Aug 6, 2019

How to use newly announced technologies to create incredible apps :- You can create an incredible app using iOS 13. Announced at WWDC in June, iOS 13 is Apple’s next operating system for iPhones and iPads. Features include Dark Mode, a Find My app, a revamped Photos app, a new Siri voice, updated privacy features, a new street-level view for Maps, and more.
Latest Features :-
1. Core ML 3
2. The Vision framework
3. New in Siri
4. Sign In with Apple option
5. Dark Mode
6. PencilKit

1. Core ML 3 :-
  • Apple introduced Core ML 3, the latest iteration of its machine learning model framework for iOS developers, bringing machine intelligence to smartphone apps. For the first time, Core ML 3 supports on-device training of machine learning models, so apps can deliver personalized experiences. The ability to train multiple models with different data sets will also be part of a new Create ML app on macOS for applications like object detection and identifying sounds.
  • Apple’s machine learning framework will be able to support more than 100 model layer types.
  • New layers, more possibilities: in addition to on-device training, Core ML 3 also brings support for a number of new architectures, layer types, and operations that open the door for complex models and use cases. These updates aren’t always flashy, but they make a huge difference in the framework’s utility.
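  On-device updating is exposed through the new MLUpdateTask API. The sketch below is a minimal, hypothetical example: the compiled model URL, the "drawing" and "label" feature names, and the training samples are assumptions, not code taken from Apple's documentation.

  import CoreML

  // Retrain an updatable Core ML model on-device and save the personalized copy.
  // Model URL, feature names, and samples are hypothetical placeholders.
  func personalizeModel(at compiledModelURL: URL,
                        samples: [(drawing: MLFeatureValue, label: String)],
                        savingTo updatedModelURL: URL) throws {
      // Wrap each training example in a feature provider keyed by the model's input names.
      let providers: [MLFeatureProvider] = try samples.map { sample in
          try MLDictionaryFeatureProvider(dictionary: [
              "drawing": sample.drawing,
              "label": MLFeatureValue(string: sample.label)
          ])
      }
      let trainingData = MLArrayBatchProvider(array: providers)

      // MLUpdateTask runs the update on-device and hands back the updated model.
      let task = try MLUpdateTask(forModelAt: compiledModelURL,
                                  trainingData: trainingData,
                                  configuration: nil) { context in
          do {
              try context.model.write(to: updatedModelURL)
          } catch {
              print("Saving the personalized model failed: \(error)")
          }
      }
      task.resume()
  }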


    New models introduced in Core ML 3 include

    • Nearest Neighbors proto :- Nearest neighbor classifiers (kNN) are simple, efficient models for labeling and clustering that work great for model personalization.
    • Item Similarity Recommender proto :- Tree-based models that take a list of items and scores to predict similarity scores between all items, which can be used to make recommendations.
    • Sound Analysis Preprocessing proto :- Built-in operations to perform common pre-processing tasks on audio, for example transforming waveforms into the frequency domain.
    • Linked Models proto :- Shared backbones and feature extractors that can be reused across multiple models. These new models enable use cases beyond computer vision and provide additional benefits for developers using multiple models in the same application.


    Implications
    This release marks a major step forward for machine learning in the Apple ecosystem, and there are a number of implications for developers:

    • Core ML is ready to move beyond computer vision. Image-related tasks have dominated deep learning, and specifically mobile deep learning, for a few years now. Support for audio-preprocessing, generic recommender models, and complex operations required for this year’s crop of NLP models promises to change that. Developers should start thinking about ML-powered experiences for users that go beyond the camera.
    • On-device training will demand new UX and design patterns. How much data is needed to personalize a model with sufficient accuracy? What’s the best way to solicit training data from users? How often should this be done? As ML moves closer and closer to core application logic, developers need to think about how these features are communicated to users.
    • Personalized models will need persistence and syncing. Training data for personalized models will remain on-device, but the model itself may need to be stored elsewhere. If a user deletes then re-installs an app or wants to use the same app on multiple devices, their personalization should go with them. Developers will need systems to back up and sync models.
    • It’s now possible to do end-to-end machine learning and skip Python. Python has been the preferred programming language of ML engineers for nearly a decade now. With the ability to train models, Core ML + Swift is now a viable alternative for some projects. Will mobile developers skip Python entirely and opt for a language they already know? Time will tell.

2. The Vision Framework :-

    • Vision is a new, powerful, and easy-to-use framework that provides solutions to computer vision challenges through a consistent interface. Understand how to use the Vision API to detect faces, compute facial landmarks, track objects, and more. Learn how to take things even further by providing custom machine learning models for Vision tasks using CoreML.
    • Vision is a framework for applying high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.
    • iOS 13 also brings many more new frameworks and improvements to existing frameworks.


      The Vision framework allows you to :-

      • Detect face rectangles and face landmarks (face contour, median line, eyes, brows, nose, lips, pupil positions).
      • Find projected rectangular regions (rectangles seen in perspective).
      • Find and recognize barcodes.
      • Find regions of visible text.
      • Determine the horizon angle in an image.
      • Detect transforms needed to align the content of two images.
      • Process images with a Core ML model.
      • Track movement of a previously identified arbitrary object across multiple images or video frames.

      There are 3 base class categories :-

      • VNRequest and derived classes :- describe your analysis request. A request has a completion handler and an array of results.
      • VNObservation and derived classes :- describe an analysis result.
      • VNImageRequestHandler, VNSequenceRequestHandler :- process one or more VNRequests on a given image or image sequence.
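      A minimal sketch tying those three classes together for face detection; the background dispatch queue and the image source are assumptions, not part of the Vision API itself:

      import Foundation
      import Vision

      // Run a face-rectangle request (VNRequest) on a single image via
      // VNImageRequestHandler and read back the VNFaceObservation results.
      func detectFaces(in image: CGImage) {
          let request = VNDetectFaceRectanglesRequest { request, error in
              guard let faces = request.results as? [VNFaceObservation] else { return }
              for face in faces {
                  // boundingBox is in normalized coordinates (0...1).
                  print("Found a face at \(face.boundingBox)")
              }
          }

          let handler = VNImageRequestHandler(cgImage: image, options: [:])
          DispatchQueue.global(qos: .userInitiated).async {
              do {
                  try handler.perform([request])
              } catch {
                  print("Vision request failed: \(error)")
              }
          }
      }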

      What You Can Do with Vision :-

      • Face Detection
      • Face Detection: Small Faces
      • Face Detection: Strong Profiles
      • Face Detection: Partially Occluded
      • Face Detection: Hats and Glasses
      • Face Landmarks
      • Image Registration
      • Rectangle Detection
      • Barcode Detection
      • Text Detection
      • Object Tracking [For faces, rectangles, and general templates]

      Summary

      • Vision is a new high-level framework for Computer Vision
      • Various detectors and tracking through one consistent interface
      • Integration with Core ML allows you to use custom models with ease
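      As a sketch of that Core ML integration, a custom image-classification model (here loaded from a hypothetical modelURL pointing at a compiled .mlmodelc) can be wrapped in VNCoreMLModel and run like any other request; Vision takes care of scaling and converting the input image:

      import CoreML
      import Vision

      // Classify an image with a custom Core ML model through Vision.
      func classify(_ image: CGImage, modelURL: URL) throws {
          let coreMLModel = try MLModel(contentsOf: modelURL)
          let visionModel = try VNCoreMLModel(for: coreMLModel)

          let request = VNCoreMLRequest(model: visionModel) { request, _ in
              if let best = (request.results as? [VNClassificationObservation])?.first {
                  print("Top label: \(best.identifier) (confidence \(best.confidence))")
              }
          }
          try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
      }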

3. New in Siri :-

  • Siri is getting a new voice in iOS 13, Apple announced on stage at WWDC 2019, with the company employing new “Neural text to speech” technology to make the virtual assistant sound much more natural.
  • Unlike the old version of Siri, the iOS 13 voice is entirely generated by software, instead of using audio clips from voice actors. In a brief demo shown on stage, the new voice does seem to do a better job at actually pronouncing words, especially ones that are more complicated (like “thermodynamics”). The new voice is also better at longer sentences, stressing syllables more accurately than the older version.
  • The received wisdom is that Siri lags behind Amazon Alexa and Google Home, but received wisdom has a problem: whatever the subject, the opinion tends to be formed quickly and takes a very long time to change. So even though Siri has zoomed ahead with the advances in iOS 13 and iPadOS, it’s going to take time for that to really register.
  • That’s partly because it’s going to be months before we all have the final versions on our iPhones and iPads. It’s also because it’s then going to take us time to really experience the differences.
  • That’s especially true because some of those differences are a direct result of improvements to Siri Shortcuts, which is nowhere near as mainstream as Alexa. It’s also the case that Siri can go further: there are things we’d like it to change and areas we’d love it to improve in. And while this may be reading too much into it, some of this year’s improvements even lay the groundwork for those areas.

4. Sign In with Apple option :-

  • Apple’s new sign-in feature brings a secure way to log in to your iOS 13 apps
  • At WWDC 2019 Apple is continuing to make its case and push for stronger privacy features. To make it simpler and more secure for iOS 13 users to sign in to apps, Apple is launching a new sign-in button called “Sign in with Apple.” The tool works like similar social sign-in buttons — like those that allow users to log into third-party apps with either their Google or Facebook ID — but adds Apple’s twist with a focus on privacy and security.
  • Most apps and services today require users to either create a user profile or log in with a social ID to deliver a unique, customized experience. The former comes with a lot of friction, as it requires you to enter a lot of information, while the latter is convenient but could reveal a lot about you. “Now, this can be convenient, but it also can come at the cost of your privacy, your personal information sometimes gets shared behind the scenes and these logins can be used to track you,” Apple Senior Vice President of Software Engineering Craig Federighi said of competing social sign-in options during Apple’s keynote address. “So we wanted to solve this, and many developers do, too. And so now we have the solution. It’s called Sign In with Apple.”
  • The Apple sign-in button allows iOS users to sign in to apps, like ride-sharing apps, with their Apple ID but without all the tracking or having to reveal personal information. Apple will provide developers with the sign-in APIs to build into their apps (see the sketch after this list).
  • When apps require your email address or name, you can choose how you want to share this information with developers. In a demo using Bird’s scooter rental app, Federighi showed that users can either share their email address with the developer or choose an option that creates a randomized email address that relays messages to your Apple iCloud email address to protect your privacy. “And that’s good news because we give each app a unique random address, and this means you can disable any one of them at any time when you’re tired of hearing from that app,” he said.
  • Though the experience was focused on iOS, Apple will bring its sign-in button to all of its platforms and on the web, Federighi said, so you’ll have more control over the personal information you share with websites and developers when you log in to apps and services.
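  A minimal sketch of the client-side flow using the AuthenticationServices framework; the view controller and whatever you do with the returned credential (for example, sending it to your own server) are assumptions:

  import UIKit
  import AuthenticationServices

  // Present the Sign In with Apple sheet and handle the returned credential.
  class SignInViewController: UIViewController,
                              ASAuthorizationControllerDelegate,
                              ASAuthorizationControllerPresentationContextProviding {

      @objc func handleSignInWithApple() {
          let request = ASAuthorizationAppleIDProvider().createRequest()
          // Ask only for what you need; the user may still hide their real email.
          request.requestedScopes = [.fullName, .email]

          let controller = ASAuthorizationController(authorizationRequests: [request])
          controller.delegate = self
          controller.presentationContextProvider = self
          controller.performRequests()
      }

      func authorizationController(controller: ASAuthorizationController,
                                   didCompleteWithAuthorization authorization: ASAuthorization) {
          if let credential = authorization.credential as? ASAuthorizationAppleIDCredential {
              // `user` is a stable identifier; `email` may be a private relay address.
              print("Signed in as \(credential.user), email: \(credential.email ?? "hidden")")
          }
      }

      func authorizationController(controller: ASAuthorizationController,
                                   didCompleteWithError error: Error) {
          print("Sign in with Apple failed: \(error)")
      }

      func presentationAnchor(for controller: ASAuthorizationController) -> ASPresentationAnchor {
          return view.window!
      }
  }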

5. Dark Side of the Mode :-

  • Dark mode! It’s here! It makes things dark! Or rather, it makes the overall OS and Apple’s preinstalled apps dark since third-party apps aren’t getting updated yet with the new Dark Mode API that will let them flip, too. In addition to apps getting proper black backgrounds (which should help OLED battery life a bit, at least in theory), the dark theme applies to the glass textures for the dock and notifications, too.
  • Dark mode is the biggest cosmetic change to iOS here, and it looks pretty nice. The new theme is comprehensive across almost all of Apple’s apps, although there are some holdouts, like the Apple Store app and the iWork suite that haven’t gotten support yet.
  • It’s not just a toggle, though. Apple will let you automatically schedule when it turns on or off, or set it to coincide with sunrise and sunset, in addition to the planned Control Center toggle that will give users manual control (say, for when you’re in a dark room and don’t want to blind yourself).
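  For third-party apps, adopting the new mode mostly means using the adaptive system colors and, for custom drawing, checking the trait collection. A minimal sketch (the view and the color choices are hypothetical):

  import UIKit

  // React to light/dark changes in a custom view.
  class ChartView: UIView {
      override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
          super.traitCollectionDidChange(previousTraitCollection)
          backgroundColor = traitCollection.userInterfaceStyle == .dark ? .black : .white
      }
  }

  // A view controller can also force a subtree into (or out of) Dark Mode:
  // someViewController.overrideUserInterfaceStyle = .dark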

    Dark Mode in:

    1) Photos
    2) Apple Maps
    3) Reminders

    1) Photos :-
      • One of the more surprising changes here is the new Photos app, which adds a new view mode that sorts pictures automatically by day, month, or year. It’s a little complicated at first, and in my testing the algorithmic sorting didn’t work great, although that may be due to the limited number of photos on our test account. It’s a good idea in theory, though, meant to help highlight your best pictures over time.
      • Far more important, though, are the new editing tools, which are surprisingly powerful. You’re able to edit brilliance, highlights, shadows, contrast, saturation, white balance, sharpness, definition, vignette, and noise reduction on pictures. The UI is simple: it’s just a scrolling wheel of the different photo modes to edit, with a slider to adjust the effect, but the results are impressive, and more importantly, the new app is super fast.
      • Pro users will still want to stick with more comprehensive solutions like Lightroom, but for most people, the new tools should be more than enough for tweaking a picture before sending it to Instagram or Facebook. And based on what’s here so far, some of the more basic photo editing tools on the App Store are definitely in danger of getting Sherlocked.

    2) Apple Maps :-
      • Apple Maps gets a lot of flak for its disastrous launch and the fact that Google Maps has been so much better for so long. But Apple’s hoping to change that reputation in iOS 13, which is getting a whole new Maps app.
      • First impressions are surprisingly good: the new maps have way more data than the old ones; the quick row of favorites icons for home, work, and your favorite stores or restaurants is useful; and there’s a collection feature that lets you group places and share them, like if you want to plan a vacation or share your favorite spots to eat with a friend. I’ve already put together lists of my favorite bars and restaurants in the city, and it’s the sort of thing I can actually see myself using going forward.
      • Apple’s also made some big speed improvements here, especially when panning around the map and pulling up directions. Compared to the increasingly feature-bloated Google Maps, the leaner, more focused Apple app is actually kind of nice.
      • Other parts of the update are Apple just getting to par, with additions like real-time public transit estimates and a new, Google Street View-style mode that are just table stakes.
      • Is all this enough to overcome Google Maps’ huge lead? We’ll have to wait and see, but for the first time, I can imagine considering Apple Maps as an option, which feels like a win on its own.


    3) Reminders :-

      • Apple’s old Reminders app was bare-bones in the extreme; its only prominence was due to its privileged position as Apple’s default option. The new one is a lot more full-featured, ranging from basic additions (you can now filter reminders from across your lists by what’s due today, what’s flagged, and what’s scheduled) to the more complex (a new quick menu bar that pops up over the keyboard to add things like geofences, times, or pictures to reminders).
      • There’s also a heavy dose of Siri added in, for Fantastical-like detection of what you’re typing (i.e., type “Remind me to feed the fish every afternoon” and you’ll get a prompt to turn that into a daily reminder). And in a neat trick, you can tag your contacts in your reminders, so you’ll be pinged the next time you message them in iMessage, a cool bit of cross-app functionality.
      • The new Reminders app still isn’t the best around, and dedicated GTD software like Things or Any.do will still serve better if you need more functionality. But given its placement as the default option for Siri and deep iOS and macOS integration, it’s good to see that Apple’s put some effort into making improvements. And with those inherent advantages, the rest of the app being merely okay might be good enough.

6. PencilKit :-

  • Capture touch input as an opaque drawing and display that content from your app.
  • PencilKit makes it easy to incorporate hand-drawn content into your iOS or macOS apps. PencilKit provides a drawing environment for your iOS app that takes input from Apple Pencil, or the user’s finger, and turns it into high-quality images you can display in either iOS or macOS. The environment comes with tools for creating, erasing, and selecting lines.
  • You capture content in your iOS app using a PKCanvasView object. The canvas object is a view that you integrate into your existing view hierarchy. It supports the low-latency capture of touches originating from Apple Pencil or your finger. It then vends the final results as a PKDrawing object, whose contents you can save with your app’s content. You can also convert the drawn content into an image, which you can display in your iOS or macOS app.
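  A minimal sketch of that flow, assuming a plain view controller; how you persist the drawing data is left to the app:

  import UIKit
  import PencilKit

  // Embed a PKCanvasView and render its PKDrawing to an image.
  class DrawingViewController: UIViewController {
      let canvasView = PKCanvasView()

      override func viewDidLoad() {
          super.viewDidLoad()
          canvasView.frame = view.bounds
          canvasView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
          view.addSubview(canvasView)
      }

      func exportDrawing() -> UIImage {
          // The canvas vends its content as a PKDrawing, which can be rendered to an image.
          let drawing = canvasView.drawing
          return drawing.image(from: drawing.bounds, scale: UIScreen.main.scale)
      }
  }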

  • Pencil Interactions :-

    • Apple introduced PencilKit at WWDC 2019 this week for much easier implementation of Apple Pencil experiences in third-party apps. The new framework will allow devs to tap into the same low latency, the new Apple Pencil tool palette, and the “markup anywhere” features that Apple itself uses for drawing and annotating in its own apps.
    • Handle double-tap interactions that a user makes on Apple Pencil.
    • Pencil interactions let your app detect double taps the user makes on their Apple Pencil. Supporting Pencil interactions in your app gives the user a quick way to perform an action such as switching between drawing tools by simply double-tapping their Apple Pencil.
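    A minimal sketch using UIPencilInteraction; the printed actions are placeholders for whatever tool-switching logic an app actually implements:

    import UIKit

    // Handle the Apple Pencil double tap and respect the user's system preference.
    class CanvasViewController: UIViewController, UIPencilInteractionDelegate {
        override func viewDidLoad() {
            super.viewDidLoad()
            let interaction = UIPencilInteraction()
            interaction.delegate = self
            view.addInteraction(interaction)
        }

        func pencilInteractionDidTap(_ interaction: UIPencilInteraction) {
            switch UIPencilInteraction.preferredTapAction {
            case .switchEraser:     print("Toggle between the current tool and the eraser")
            case .switchPrevious:   print("Switch back to the previously used tool")
            case .showColorPalette: print("Show the color palette")
            default:                break
            }
        }
    }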

    Lower latency with PencilKit :-

    • Apple is already using PencilKit across the entire system in iOS 13 including in Notes for low latency drawing and note-taking, in Pages for marking up documents, and with its “markup anywhere” feature for annotating screenshots and PDFs.
    • The new APIs require just three lines of code for developers to get the same low latency, UI, and tool palette that Apple uses for Pencil. That includes the drop from 20 milliseconds to 9 milliseconds latency that Apple announced during its unveiling of iPadOS.
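    Those few lines boil down to attaching the shared tool picker to a canvas. This sketch assumes iOS 13, where the picker is shared per window, and that the canvas view is already in the view hierarchy:

    import UIKit
    import PencilKit

    // Attach the system tool palette to a PKCanvasView (iOS 13 API).
    func attachToolPicker(to canvasView: PKCanvasView, in window: UIWindow) {
        guard let toolPicker = PKToolPicker.shared(for: window) else { return }
        toolPicker.setVisible(true, forFirstResponder: canvasView)
        toolPicker.addObserver(canvasView)
        canvasView.becomeFirstResponder()
    }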

    New Dynamic Tool Picker & Expressive Inks

    • A big part of what developers get access to with the APIs is Apple’s canvas and new dynamic tool picker, with pen, marker, pencil, eraser, and lasso tools. That includes Apple’s expressive, responsive inks and drawing models that it uses in apps like Notes and Pages.
    • Some of this new feature set was shown briefly by Apple on stage to demo new and improved “markup anywhere” features for annotating screenshots and PDFs – now integrated system-wide with support for editing full documents and more. With the introduction of PencilKit, developers will be able to much more easily offer users access to the markup controls to draw or annotate in third-party apps too, even for apps that might not use Apple Pencil as the main input device.
    • Developers interested in learning more can head over to Apple’s website where it has sample code for the new PencilKit APIs.