Commentary & Reflections from the TikTok Methodologies Discussion Forum

Abstract

As methodologies for studying TikTok continue to develop, deliberation among research communities can offer valuable insights, helping to incorporate different approaches and iterate on emerging research designs. In July 2021, the TikTok Cultures Research Network held a public research forum on TikTok methodologies that was attended by over two hundred researchers and members of the public from around the world. The discussion-based event featured two panels and a vibrant open question-and-answer session that highlighted many of the complex issues TikTok researchers are facing. In this short piece, the two discussion moderators and one of the event’s organisers reflect on some of the unanswered questions from our forum. Topics raised by researchers in the audience included ethical considerations to protect research participants, particularly younger TikTok users, as well as protections for researchers themselves, particularly when studying problematic or extreme content. Audience members raised technical questions about how to collect and store large quantities of video data and how to extract and analyse qualitative elements such as video text or hashtags. They also asked how to avoid biases when studying a platform that tailors experiences to users’ individual cultural preferences.
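One of the audience's technical questions was how to extract qualitative elements such as hashtags from collected material. As a minimal, illustrative sketch, assuming captions have already been collected as plain strings (the collection itself, and its ethical and terms-of-service questions, are exactly what the forum debated and are out of scope here):

```python
import re

# Hypothetical helper: pull hashtags out of already-collected caption text.
HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(caption: str) -> list[str]:
    """Return lowercased hashtags found in a caption string."""
    return [tag.lower() for tag in HASHTAG_RE.findall(caption)]

captions = [
    "day in my life #fyp #StudyTok",
    "no tags here",
]
for c in captions:
    print(extract_hashtags(c))
```

In practice researchers would also need to handle non-Latin scripts and platform-specific tag conventions, which this regex does not attempt.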

Similar Papers
  • Research Article
  • Cited by 1
  • 10.32631/pb.2024.1.07
Tracking illegal activities using video surveillance systems: a review of the current state of research
  • Mar 29, 2024
  • Law and Safety
  • D O Zhadan + 2 more

The current state of research on the use of neural networks under martial law to identify offenders committing illegal acts, prevent acts of terrorism, combat sabotage groups in cities, track weapons and control traffic is considered. The methods of detecting illegal actions, weapons, faces and traffic violations using video surveillance cameras are analysed. It is proposed to introduce the studied methods into the work of “smart” video surveillance systems in Ukrainian settlements. The most effective means of reducing the number of offences is the inevitability of legal liability, so many law enforcement efforts are aimed at preventing offences. Along with public order policing by patrol police, video surveillance is an effective way to prevent illegal activities in society. Increasing the coverage area and number of cameras helps to ensure public safety in the areas where they are used. However, an increase in the number of cameras creates another problem: the large amount of video data that needs to be processed. To solve this problem, various methods are used, the most modern of which is the use of artificial intelligence to filter the large volume of data from video cameras and the application of various video-processing algorithms. The ability to simultaneously process video data from many CCTV cameras without human intervention not only contributes to public safety but also improves the work of patrol police. The introduction of smart video surveillance systems allows the situation in public places to be monitored around the clock, even when there is no police presence in the area. In the reviewed studies of video surveillance systems, neural networks trained on large amounts of video and photo data, in particular MobileNet V2, YOLO and mYOLOv4-tiny, are used to track illegal actions, criminals and weapons. It has been found that although neural networks used to require a lot of computing power, they can now run on IoT systems and smartphones, which means that more video surveillance devices can be used to monitor the situation.

  • Conference Article
  • Cited by 5
  • 10.1117/12.2067568
Detection and tracking of humans from an airborne platform
  • Oct 7, 2014
  • Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE
  • Adam W M Van Eekeren + 2 more

Airborne platforms record large amounts of video data. Extracting the events that need to be seen is a time-demanding task for analysts, because the sensors record hours of video in which only a fraction of the footage contains events of interest. Retrieving such events from the large amounts of video data by hand is hard. A way to extract information more automatically from the data is to detect all humans within the scene. This can be done in a real-time scenario (both on board and at the ground station) for strategic and tactical purposes, and in an offline scenario where the information is analyzed after recording to acquire intelligence (e.g. a daily life pattern). In this paper, we evaluate three different methods for object detection from a moving airborne platform. The first is a static person detection algorithm. Its main advantage is that it can be used on single frames and therefore does not depend on the stabilization of the platform; its main disadvantage is that the number of pixels needed for detection is relatively large. The second method is based on detection of motion-in-motion: the background is stabilized, and clusters of pixels that move with respect to this stabilized background are detected as moving objects. Its main advantage is that all moving objects are detected; its main disadvantage is that it depends heavily on the quality of the stabilization. The third method combines both previous detection methods. The detections are tracked using a histogram-based tracker, so that missed detections can be filled in and a trajectory of all objects can be determined. We demonstrate the tracking performance using the three different detection methods on the publicly available UCF-ARG aerial dataset. Performance is evaluated for two human actions (running and digging) and varying object sizes. It is shown that the combined detection approach (static person detection plus motion-in-motion detection) gives better tracking results for both human actions than using either detector alone. Furthermore, it can be concluded that humans must be at least 20 pixels tall to guarantee good tracking performance.
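The core of the motion-in-motion idea can be sketched on toy synthetic frames: once the background is stabilized, pixels that differ between consecutive frames mark movers. This is only an illustration of the principle; real aerial imagery needs registration and noise handling that the sketch omits.

```python
# Toy motion detection on stabilized synthetic frames (lists of pixel rows).
def moving_pixels(prev, curr, thresh=10):
    """Coordinates where two stabilized frames differ by more than thresh."""
    return {
        (r, c)
        for r in range(len(prev))
        for c in range(len(prev[0]))
        if abs(curr[r][c] - prev[r][c]) > thresh
    }

background = [[50] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[2][3] = 200          # a "mover" appeared at this pixel
print(moving_pixels(background, frame))  # → {(2, 3)}
```

Clustering these pixels into connected blobs would give the moving-object detections that the paper then feeds to its tracker.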

  • Conference Article
  • Cited by 30
  • 10.1145/2578726.2578775
Collecting and Annotating Human Activities in Web Videos
  • Apr 1, 2014
  • Fabian Caba Heilbron + 1 more

Recent efforts in computer vision tackle the problem of human activity understanding in video sequences. Traditionally, these algorithms require annotated video data to learn models. In this paper, we introduce a novel data collection framework to take advantage of the large amount of video data available on the web. We use this framework to retrieve videos of human activities in order to build datasets for training and evaluating computer vision algorithms. We rely on Amazon Mechanical Turk workers to obtain high-accuracy annotations. An agglomerative clustering technique makes it possible to achieve reliable and consistent annotations for temporal localization of human activities in videos. Using two different datasets, Olympics Sports and our novel Daily Human Activities dataset, we show that our collection/annotation framework achieves robust annotations for human activities in large amounts of video data.

  • Research Article
  • Cited by 23
  • 10.1016/j.optlastec.2018.08.051
Reduction of bubble-like frames using a RSS filter in wireless capsule endoscopy video
  • Sep 5, 2018
  • Optics & Laser Technology
  • Qian Wang + 5 more


  • Dissertation
  • 10.17918/etd-3908
Automated categorization of Drosophila learning and memory behaviors using video analysis
  • Jul 16, 2021
  • Md Alimoor Reza + 1 more

The ability to study learning and memory behavior in living organisms has significantly increased our understanding of what genes affect this behavior, allowing for the rational design of therapeutics in diseases that affect cognition. The fruit fly, Drosophila melanogaster, is a well established model organism used to study the mechanisms of both learning and memory in vivo. The techniques used to assess this behavior in flies, while powerful, suffer from a lack of speed and quantification. The technical goal of this project is to create an automated method for characterizing this behavior in fruit flies by analyzing video of their movements. A method is developed to replace and improve a labor-intensive, subjective evaluation process with one that is automated, consistent and reproducible; thus allowing for robust, high-throughput analysis of large quantities of video data. The method includes identifying individual flies in a video, quantifying their size (which is correlated with their gender), and tracking their motion. Once the flies are identified and tracked, various geometric measures may be computed, for example distance between flies, their relative orientation, velocities and percentage of time the flies are in contact with each other. This data is computed for numerous experimental videos and produces high-dimensional feature vectors that quantify the behavior of the flies. Clustering techniques, e.g., k-means clustering, may then be applied to the feature vectors in order to computationally group each specimen by genotype. Our results show that we are able to automatically differentiate between normal and defective flies. We also generated a Computed Courtship Index (CCI), a computational equivalent of the existing Courtship Index (CI), and compared CCI with CI. These results demonstrate that our automated analysis provides a numerical scoring of fly behavior that is similar to the scoring produced by human observers.
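The clustering step described above can be illustrated with a plain k-means pass over toy feature vectors. The two features used here (mean inter-fly distance and contact fraction) and the numbers are hypothetical stand-ins, not the dissertation's actual measures:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns a cluster label for each point (tuple)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: move each centre to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return labels

# Hypothetical 2-D feature vectors: (mean inter-fly distance, contact fraction).
normal = [(1.0, 0.80), (1.1, 0.75), (0.9, 0.85)]
defective = [(4.0, 0.10), (4.2, 0.15), (3.9, 0.12)]
labels = kmeans(normal + defective, k=2)
print(labels)  # the two well-separated genotypes fall into two clusters
```

In practice the feature vectors are high-dimensional, and the number of clusters and the distance metric both matter; this sketch only shows the grouping mechanism.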

  • Conference Article
  • Cited by 4
  • 10.1109/itsc.2010.5625127
Vehicle parameterization and tracking from traffic videos
  • Sep 1, 2010
  • Anh Vu + 2 more

The popularity of surveillance cameras in traffic management systems has produced large quantities of video data that cannot easily be processed by humans. We present a method for high-resolution traffic surveillance videos that tracks and estimates vehicles' state when the cameras are mounted on structures of moderate height, typically less than 10 meters. This tracking method enables a number of applications and does not have the infrastructure requirements of other vehicle tracking methods. The method requires that both the internal and external camera parameters are calibrated and that vehicles move on a ground plane. Each vehicle in this tracking process is parameterized as a rectangular cuboid with dimensions (length, width and height) and a state (position and attitude) reflecting those of the vehicle. From a traffic video stream, visible features on the surface of a vehicle are selected and tracked. A particle filter is used to infer the vehicle's state as it moves through the camera's field of view. In this paper, we present the method, as well as results from real and simulated data, which demonstrate robust tracking and state estimation for a variety of vehicle types.
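The particle filter's predict/weight/resample loop can be shown on a toy 1-D tracking problem. This is not the paper's filter, which estimates a full cuboid pose from image features; all constants here are made up for illustration:

```python
import math
import random

# Toy 1-D particle filter: track a target moving at constant velocity
# from noisy position measurements.
rng = random.Random(1)
N = 500
true_pos, velocity, meas_noise = 0.0, 1.0, 0.5
particles = [rng.gauss(0.0, 1.0) for _ in range(N)]

for step in range(20):
    true_pos += velocity
    z = true_pos + rng.gauss(0.0, meas_noise)                 # noisy measurement
    # Predict: propagate each particle with the motion model plus jitter.
    particles = [p + velocity + rng.gauss(0.0, 0.2) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((p - z) ** 2) / (2 * meas_noise ** 2)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    particles = rng.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
print(estimate, true_pos)  # the estimate should sit near the true position
```

The same loop generalises to the cuboid state in the paper by swapping in a richer motion model and an image-based likelihood.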

  • Research Article
  • 10.4028/www.scientific.net/amr.591-593.2358
Innovation on “Machinery and Equipment” Classroom Teaching for the Cultivation of Excellent Engineers
  • Nov 1, 2012
  • Advanced Materials Research
  • Wei Yan Shang + 2 more

In order to cultivate excellent engineers for our college, the classroom teaching of “machinery and equipment” has been innovated. To improve students’ ability in engineering application and innovation, four innovative measures have been taken. Firstly, mechanical design competitions have been integrated into classroom teaching. Another innovation is arranging the curriculum design at the start of the semester. To encourage students to participate actively in teaching activities, advanced production equipment, or equipment the students find interesting, has been selected for them to present in class. Finally, large quantities of video data have been introduced into the classroom. These innovations strengthen the convergence of theoretical and practical teaching and excite students’ enthusiasm for their professional courses, thus contributing to our cultivation of excellent engineers.

  • Research Article
  • Cited by 71
  • 10.1016/j.dsr.2014.07.007
Predicting the distribution of deep-sea vulnerable marine ecosystems using high-resolution data: Considerations and novel approaches
  • Aug 2, 2014
  • Deep Sea Research Part I: Oceanographic Research Papers
  • Anna M Rengstorf + 4 more


  • Book Chapter
  • Cited by 3
  • 10.1002/9780470061626.shm138
Video Landing Parameter Surveys
  • Jan 26, 2008
  • Encyclopedia of Structural Health Monitoring
  • Thomas Defiore + 1 more

In an effort to better understand and document the actual operational environment of jet aircraft landing impact conditions, the US Navy, and, later, the Federal Aviation Administration (FAA) initiated a series of aircraft video landing parameter surveys at high-activity airfields, both civil and military, and on aircraft carriers. By collecting and analyzing large quantities of video data for a wide variety of aircraft, the original design criteria and fatigue-life estimates for aircraft landing gear and support structures can be assessed and verified. Video landing parameter surveys are a joint research effort of the US Navy and FAA to acquire and evaluate aircraft landing impact conditions. Field research teams temporarily install video camera(s) on the apron of the runway or catwalk of an aircraft carrier. These cameras record high-resolution video images of an aircraft's touchdown event. The images are subsequently digitized and analyzed along with individual model airplane geometry to calculate at touchdown: sink speed, ground/air speed, pitch, roll, yaw, distance from threshold (or carrier ramp), and other parameters describing the touchdown event. Reports containing processed results are published and most of them are available in the public domain. Video surveys have been conducted on 8 commercial airfields, 6 military airfields, and over 11 aircraft carriers. These surveys are the only research of their kind in the international aviation industry. The objective of US Navy surveys is to validate the MIL-A-8863 sink-speed fatigue spectrum. FAA landing parameter surveys assess the continued suitability requirements in 14 CFR Part 25.473 for a limit descent velocity of 10 ft s−1 at touchdown and the sink-speed fatigue spectrum previously used by airframe manufacturers, NASA TN D 4529. Keywords: video; surveys; parameters; landing; sink-speed; fatigue; spectrum; transports; design; certification

  • Research Article
  • Cited by 46
  • 10.1016/j.imavis.2008.04.021
Automated detection of unusual events on stairs
  • May 11, 2008
  • Image and Vision Computing
  • Jasper Snoek + 4 more


  • Book Chapter
  • Cited by 1
  • 10.1007/978-3-642-33191-6_23
Video Analysis Algorithms for Automated Categorization of Fly Behaviors
  • Jan 1, 2012
  • Md Alimoor Reza + 5 more

The fruit fly, Drosophila melanogaster, is a well established model organism used to study the mechanisms of both learning and memory in vivo. This paper presents video analysis algorithms that generate data that may be used to categorize fly behaviors. The algorithms aim to replace and improve a labor-intensive, subjective evaluation process with one that is automated, consistent and reproducible; thus allowing for robust, high-throughput analysis of large quantities of video data. The method includes tracking the flies, computing geometric measures, constructing feature vectors, and grouping the specimens using clustering techniques. We also generated a Computed Courtship Index (CCI), a computational equivalent of the existing Courtship Index (CI). The results demonstrate that our automated analysis provides a numerical scoring of fly behavior that is similar to the scoring produced by human observers. They also show that we are able to automatically differentiate between normal and defective flies via analysis of their videotaped movements. Keywords: feature vector; courtship behavior; video segment; white region; head direction

  • Book Chapter
  • Cited by 7
  • 10.1007/978-3-540-89796-5_39
An Efficient Method for Near-Duplicate Video Detection
  • Jan 1, 2008
  • Bashar Tahayna + 1 more

In order to monitor video streams in real-time or search large collections of video documents, several solutions based on near-duplicate video detection have been proposed in the literature. We present in this paper an architecture based on signature-based index structures coupling visual and temporal features and on an N-gram matching and scoring framework. The techniques we cover are robust and insensitive to general video editing and/or degradation, making them ideal for re-broadcast video search. Through the use of signature-based indexing and N-gram matching and scoring, we identify corresponding query and index contents accurately in order to detect near-duplicate videos, even when these contents constitute only a small section of the videos being compared. Experiments are carried out on large quantities of video data collected from the TRECVID 02, 03 and 04 collections and real-world video broadcasts recorded from two German TV stations. An empirical comparison with two state-of-the-art dynamic programming techniques is encouraging and demonstrates the advantage and feasibility of our method.
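The N-gram matching idea can be sketched by representing each video as a sequence of per-frame signature symbols and scoring the overlap of their n-gram sets. The symbols and the Jaccard score below are illustrative assumptions, not the paper's exact signatures or scoring framework:

```python
def ngrams(seq, n=3):
    """Set of length-n subsequences of a signature sequence."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def ngram_score(a, b, n=3):
    """Jaccard overlap of two videos' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Hypothetical per-frame signature symbols (e.g. quantised colour histograms):
original = ["a", "b", "c", "d", "e", "f", "g"]
rebroadcast = ["x", "b", "c", "d", "e", "f", "y"]   # head and tail re-edited
unrelated = ["p", "q", "r", "s", "t", "u", "v"]

print(ngram_score(original, rebroadcast))  # high: shared middle section survives
print(ngram_score(original, unrelated))    # 0.0: no shared n-grams
```

Because n-grams only require a shared run of n consecutive frames, the score stays high even when the matching content is a small section of a longer, edited video, which is the property the paper exploits.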

  • Book Chapter
  • Cited by 50
  • 10.1007/978-3-319-46478-7_43
Sympathy for the Details: Dense Trajectories and Hybrid Classification Architectures for Action Recognition
  • Jan 1, 2016
  • César Roberto De Souza + 3 more

Action recognition in videos is a challenging task due to the complexity of the spatio-temporal patterns to model and the difficulty to acquire and learn on large quantities of video data. Deep learning, although a breakthrough for image classification and showing promise for videos, has still not clearly superseded action recognition methods using hand-crafted features, even when training on massive datasets. In this paper, we introduce hybrid video classification architectures based on carefully designed unsupervised representations of hand-crafted spatio-temporal features classified by supervised deep networks. As we show in our experiments on five popular benchmarks for action recognition, our hybrid model combines the best of both worlds: it is data efficient (trained on 150 to 10000 short clips) and yet improves significantly on the state of the art, including recent deep models trained on millions of manually labelled images and videos.

  • Conference Article
  • 10.1117/12.189129
Computerized Structure Clearance Measurement System (CSCMS)
  • Oct 6, 1994
  • Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE
  • Richard Taylor + 3 more

Traditional surveying techniques and the use of mechanical structures mounted on rolling stock are the current methods for measuring clearance around Queensland railway lines. A new method, described in this paper, is being developed for Queensland Rail by a consortium of three Brisbane companies. The project involves the merging of two technologies, both of which are themselves evolving rapidly. The first of these is Digital Photogrammetry which provides 3D information through the processing of stereo images. The second is the capture of digital images and the pre-processing and transmission of large quantities of video data in an industrial environment. The result is a Computerized Structure Clearance Measurement System which allows operators to make accurate measurements with reference to a clearance gauge profile.

  • Research Article
  • Cited by 7
  • 10.1016/j.ipm.2011.03.003
Near-duplicate video detection featuring coupled temporal and perceptual visual structures and logical inference based matching
  • Apr 20, 2011
  • Information Processing & Management
  • Mohammed Belkhatir + 1 more

