Written by: Keith Mitchell
At the end of last year, we received a request to produce some digital media recording the new occupational therapy spaces at Glenside. This would form part of the approval process for the accreditation of new and existing Occupational Therapy programmes. Due to the ongoing pandemic restrictions, it was not possible to take staff from the RCOT on a guided tour of the spaces, so we were given the task of creating a virtual tour. We would also acquire some library photos and video footage at the same time.

Students would not feature in the images, so the RCOT could see the spaces more clearly and better scope their potential uses. This also removed the need for release forms and saved us some preparation time. Getting students posed in position for 360 images would have been challenging even without Covid restrictions in place. Photographing an empty room, however, would simply require setting up the spaces and doing some ‘visual gardening’. Teaching staff prepared the rooms the day before, and the following day I was able to get onto Glenside campus and start work in isolation.

With only a few hours to get the recording work done, I started with an initial walkthrough to determine how to connect the spaces in a video. The next step was a walk-through while recording video footage with a gimbal-mounted video camera, followed by photographing the individual rooms. With the library material acquired, it was time to move on to the 360 aspect of the project. Individual 360 video clips of each room would not be necessary, as nothing would be moving in the scene, so static 360 photos would be sufficient.
We currently have two 360 cameras available for use, and both record video and still images. One is an older Ricoh Theta S and the other a Vuze 360 3D 4K camera from HumanEyes. The latter has 4 pairs of lenses and can record in 3D and 4K. The downside is that the video files are very large and require a relatively powerful PC to edit, and the still image quality did not appear to be substantially better than the Theta S. Both cameras can be controlled over Wi-Fi using a phone app. In my initial testing of the two cameras, I had problems getting the Vuze phone app to work properly with my Android phone, so with limited testing time I ended up using the Theta S.

The phone app serves several functions. It can be used as an external monitor, as the camera has no screen, letting you view images in real time while setting up the shot. Recorded images are stored in the camera but can also be transferred to the phone over Wi-Fi for viewing and as a backup; you can then download the files from either the camera or the phone. The other main uses for the app are adjusting the camera settings and acting as a remote shutter. From the settings, you can select between full manual mode, aperture priority or shutter priority, or full auto. For this time-sensitive job, the auto mode gave the quickest and best all-round settings, and using the High Dynamic Range (HDR) setting improved the overall exposure and colour.

Had there been enough time, it would have been preferable to shoot in manual mode using the bracketing method. With this method, you take multiple images, exposing separately for highlights, mid-tones and shadows, and then combine them all in Photoshop into one single image. This gives an HDR look with much better control over the image exposure; the greater the number of separate exposures, the better the final image will be. However, it does take a bit longer to set up.
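To illustrate the idea behind merging bracketed exposures, here is a minimal sketch in Python. It is not what Photoshop does internally; it is a simplified "well-exposedness" blend, weighting each pixel by how close it is to mid-grey, so the shadows come from the overexposed frame and the highlights from the underexposed one. The filenames and the Gaussian weighting constant are my own illustrative choices.

```python
# Simplified sketch of combining bracketed exposures into one
# HDR-style image. Each pixel is weighted by how well-exposed it is
# (closest to mid-grey), then the frames are blended. This is a
# stand-in for the Photoshop merge described above, not a port of it.
import numpy as np

def fuse_exposures(images):
    """Blend a list of differently exposed 8-bit images into one 8-bit image."""
    stack = np.stack([img.astype(np.float64) / 255.0 for img in images])
    # Well-exposedness weight: Gaussian centred on mid-grey (0.5).
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    fused = (weights * stack).sum(axis=0)
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Placeholder arrays standing in for the shadow/mid/highlight exposures.
    brackets = [np.full((8, 8, 3), v, np.uint8) for v in (40, 128, 220)]
    result = fuse_exposures(brackets)
```

With more brackets the weighted blend has more usable detail to draw on at each pixel, which is why more exposures tend to give a better final image.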
F Block rooms don’t have a lot of windows, so I was able to get away with the camera set to auto HDR.
The camera needs to be mounted on a tripod or monopod with as small a footprint as possible. If the tripod legs are fully opened, it becomes very difficult to remove them from the final image. I used a light stand, as they have small legs and a threaded mount for the camera. A challenge when photographing with 360 cameras is that everything within the 360 view will be seen. So, anything that you don’t want to appear in the final image, including yourself, must be removed from the room. Once the camera is set in place, you remove yourself from the room and use the app to remotely trigger the camera. With all the rooms captured, the next part of the process was to load the images onto a PC workstation.
This is a simple process of connecting the camera via USB and copying the files to a working folder. I then went through the process of identifying and labelling the best images using Adobe Bridge. This would have taken considerably longer had all the positions been shot bracketed, with 3 to 9 images for each position in the room. In some rooms, the camera was placed centrally, but the larger rooms were photographed from 2 or 3 different positions. On reflection, the rooms shot from their centres are not as appealing as those where the camera was positioned off-centre. Once all the images were selected and labelled, they were brought into Photoshop one by one to paint out the legs of the lighting stand. This is a simple process using a clone brush and was made easier by the plain floor. Some basic exposure adjustment and colour correction and a little sharpening were all that was required to complete the images.
To create the tour, I used Kuula.co to host the images. Unfortunately, at the time we did not have a licence for the Pro version, which can add clickable hotspots to the images. These hotspots link to other images, enabling the viewer to move between and around rooms. The free version only allows hosting of images in a tour folder. Once the images have been uploaded to a tour folder, you can generate a simple URL link, or an embed code if you need to embed the tour into your own site. The tour is available to browse.
As a final experiment, I looked at importing 360 JPEG images directly into Adobe Premiere Pro and creating a simple MP4 VR video compilation from the stills. It is possible to do this, but rendering produces an unpleasant stitching glitch that is not visible until after the file has been output from Premiere. The 360 metadata is also lost, so it must be added back to the movie file. To do this I used a simple program called Spatial Media Metadata Injector. To be honest, if you wanted this type of video it would be easier to shoot 360 video and trim each shot to the desired length. Ricoh provides an app for the Theta V that converts the spherical video into something that can be imported into Premiere Pro, where you can then do some trimming and add transitions and titles.
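For anyone scripting this re-injection step, a hedged sketch of driving the injector from Python is below. Google's Spatial Media Metadata Injector is normally run from the command line; the module name and the `-i` (inject) flag here are assumptions based on its typical usage and may differ between versions, and the filenames are placeholders.

```python
# Sketch: re-adding the 360 (spherical) metadata that Premiere strips
# on export, by calling Google's Spatial Media Metadata Injector.
# Assumption: the tool is installed and runnable as "python spatialmedia",
# with -i meaning inject; check your version's README for exact flags.
import subprocess

def build_inject_command(src, dst):
    """Return the command line that injects spherical metadata into a video."""
    return ["python", "spatialmedia", "-i", src, dst]

def inject_metadata(src, dst):
    """Run the injector, raising if it fails."""
    subprocess.run(build_inject_command(src, dst), check=True)

if __name__ == "__main__":
    cmd = build_inject_command("tour.mp4", "tour_360.mp4")
```

Without this metadata, players such as YouTube or Panopto treat the file as a flat video rather than an equirectangular sphere.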
Hosting 360 Assets at UWE
We have since investigated the possibility of hosting 360 material within UWE's core tools, principally OneDrive for Business and Panopto.
Panopto is a video-only platform, so it cannot be used for 360 images. 360 videos should be uploaded in the normal way but must then be identified as VR. To do so, first hover over the file to show the options and select ‘Edit’. From the edit page, select ‘Streams’, then hover over the filename and select ‘Edit’ from the options revealed under the 3 dots. The edit stream options appear, where you can select the type of VR material: either 180-degree or 360-degree. After saving the changes, you should be able to manually move around the image.
However, at the time of writing (29/1/21) this does not work. While Panopto is supposed to be able to host 360 videos, there is currently a bug in the system preventing playback within the Panopto player. This is a known problem that Panopto are working to resolve. The only workaround for now is to copy the embed code from the Share tab and paste it into your own web page. Alternatively, if you paste the code into a notepad file and name the file either default.htm or index.htm, you can test the file in a browser. Example.
360 images will play in OneDrive for Business (but not the personal OneDrive app) after renaming the file to [filename].360.jpg or [filename].360.jpeg. Example.
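If you have a folder full of equirectangular JPEGs to upload, renaming them by hand is tedious. Here is a minimal sketch that applies the `.360.jpg` naming convention in bulk; the folder layout is an assumption, and files already following the convention are left alone.

```python
# Sketch: batch-renaming equirectangular JPEGs so OneDrive for
# Business recognises them as 360 images (the name must end in
# .360.jpg or .360.jpeg). Assumes all JPEGs in the folder are 360s.
from pathlib import Path

def to_360_name(path):
    """room1.jpg -> room1.360.jpg; already-renamed files are unchanged."""
    p = Path(path)
    if p.name.lower().endswith((".360.jpg", ".360.jpeg")):
        return p
    return p.with_name(p.stem + ".360" + p.suffix)

def rename_all(folder):
    """Apply the .360 naming convention to every JPEG in a folder."""
    for f in Path(folder).glob("*.jp*g"):
        f.rename(to_360_name(f))
```

Renaming the local copies before upload is simpler than renaming files one at a time through the OneDrive web interface.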
Both systems are a solution but involve workarounds, and the playback controls are quite unforgiving. So it remains to be seen whether they will work well as hosts for use with a headset.